合成生物电路 · 集智百科 - 复杂系统|人工智能|复杂科学|复杂网络|自组织（用户贡献：粲兰，2020-12-31）https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E7%94%B5%E8%B7%AF&diff=20731
<hr />
<div>此词条暂由袁一博翻译,翻译字数共956,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
--[[用户:小趣木木|小趣木木]]([[用户讨论:小趣木木|讨论]])文本缺失 需要补充<br />
{{Synthetic biology}}<br />
<br />
[[File:Lac Operon.svg|thumb|275px|The ''lac'' operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. 乳糖操纵子是一种天然的生物电路，许多合成电路都以它为基础。上图：阻遏状态；下图：活跃状态。<br />
'''''1'': RNA polymerase, ''2'': Repressor, ''3'': Promoter, ''4'': Operator, ''5'': Lactose, ''6'': ''lacZ'', ''7'': ''lacY'', ''8'': ''lacA''. 1：RNA 聚合酶；2：阻遏物；3：启动子；4：操纵基因；5：乳糖；6：lacZ；7：lacY；8：lacA。]]<br />
<br />
<br />
<br />
'''Synthetic biological circuits''' are an application of [[synthetic biology]] where biological parts inside a [[Cell (biology)|cell]] are designed to perform logical functions mimicking those observed in [[electronic circuit]]s. The applications range from simply inducing production to adding a measurable element, like [[Green Fluorescent Protein|GFP]], to an existing [[Gene regulatory network|natural biological circuit]], to implementing completely new systems of many parts.<ref name="Kobayashi">{{cite journal | last1 = Kobayashi | first1 = H. | last2 = Kærn | first2 = M. | last3 = Araki | first3 = M. | last4 = Chung | first4 = K. | last5 = Gardner | first5 = T. S. | last6 = Cantor | first6 = C. R. | last7 = Collins | first7 = J. J. | year = 2004 | title = Programmable cells: Interfacing natural and engineered gene networks | journal = PNAS | volume = 101 | issue = 22| pages = 8414–8419 | doi=10.1073/pnas.0402940101 | pmid=15159530 | pmc=420408}}</ref><br />
<br />
<br />
合成生物电路是合成生物学的一种应用：通过设计细胞内的生物部件来执行模仿电子电路的逻辑功能。其应用范围十分广泛，从简单地诱导产物生成，到向现有的天然生物电路中添加可测量的元件（如绿色荧光蛋白 GFP），再到搭建由许多部件组成的全新系统。<br />
<br />
[[Image:Protein translation.gif|thumb|300px| A [[ribosome]] is a [[biological machine]].核糖体是一个生物机器。]]<br />
<br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented.<ref name="SynBioFaq">{{cite web|title=Synthetic Biology: FAQ|url=http://syntheticbiology.org/FAQ.html|work=SyntheticBiology.org|accessdate=21 December 2011|url-status=dead|archiveurl=https://web.archive.org/web/20021212065409/http://syntheticbiology.org/faq.html|archivedate=12 December 2002}}</ref> These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "[[biological machine]]s" to perform a vast range of useful functions.<ref name="Kobayashi"/><br />
<br />
<br />
合成生物学旨在生成一系列可调控、已表征的部件或模块，利用它们可以方便地设计并实现任何想要的合成生物电路。这些电路可以用来修改细胞功能、创建细胞对环境条件的响应，或影响细胞的发育。通过在细胞系统中实现合理、可控的逻辑元件，研究人员可以把活体系统当作工程化的“生物机器”来执行大量有用的功能。<br />
<br />
== History 发展历程 ==<br />
<br />
<br />
The first natural gene circuit studied in detail was the [[lac operon]]. In studies of [[diauxie|diauxic growth]] of ''[[E. coli]]'' on two-sugar media, [[Jacques Monod]] and [[Francois Jacob]] discovered that ''E.coli'' preferentially consumes the more easily processed [[glucose]] before switching to [[lactose]] metabolism. They discovered that the mechanism that controlled the metabolic "switching" function was a two-part control mechanism on the lac operon. When lactose is present in the cell the [[enzyme]] [[β-galactosidase]] is produced to convert lactose into [[glucose]] or [[galactose]]. When lactose is absent in the cell the lac repressor inhibits the production of the enzyme β-galactosidase to prevent any inefficient processes within the cell.<br />
第一个被详细研究的天然基因电路是乳糖操纵子。雅克·莫诺（Jacques Monod）和弗朗索瓦·雅各布（Francois Jacob）在研究大肠杆菌在双糖培养基上的二次生长时发现，大肠杆菌会优先消耗更易利用的葡萄糖，然后才转换为乳糖代谢。他们发现，控制这种代谢“切换”功能的是乳糖操纵子上的一种两部分控制机制。当细胞中存在乳糖时，细胞会产生β-半乳糖苷酶，将乳糖转化为葡萄糖或半乳糖。当细胞中没有乳糖时，乳糖阻遏物会抑制β-半乳糖苷酶的产生，以避免细胞内出现低效的过程。<br />
<br />
<br />
The lac operon is used in the [[biotechnology]] industry for production of [[recombinant DNA|recombinant]] [[proteins]] for therapeutic use. The gene or genes for producing an [[exogenous]] protein are placed on a [[plasmid]] under the control of the lac promoter. Initially the cells are grown in a medium that does not contain lactose or other sugars, so the new genes are not expressed. Once the cells reach a certain point in their growth, [[IPTG|Isopropyl β-D-1-thiogalactopyranoside (IPTG)]] is added. IPTG, a molecule similar to lactose, but with a sulfur bond that is not hydrolyzable so that ''E. coli'' does not digest it, is used to activate or "[[Regulation of gene expression#Inducible vs. repressible systems|induce]]" the production of the new protein. Once the cells are induced, it is difficult to remove IPTG from the cells and therefore it is difficult to stop expression.<br />
<br />
<br />
乳糖操纵子在生物技术工业中用于生产治疗用的重组蛋白。产生外源蛋白的基因被置于质粒上，受乳糖启动子控制。最初，细胞在不含乳糖或其他糖类的培养基中生长，因此新基因不表达。当细胞生长到一定阶段时，向培养物中加入异丙基-β-D-1-硫代半乳糖苷（IPTG）。IPTG 是一种与乳糖类似的分子，但其中的硫键不可水解，因此大肠杆菌无法消化它；它被用来激活或“诱导”新蛋白质的产生。细胞一旦被诱导，就很难将 IPTG 从细胞中去除，因此也很难停止表达。<br />
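The two-part control logic described above can be condensed into a toy state model (an illustrative sketch only; the class name `LacPromoterPlasmid` is invented for this example and is not part of any library):

```python
class LacPromoterPlasmid:
    """Toy model of recombinant expression under the lac promoter.

    The lac repressor keeps the promoter off unless lactose (or an
    analog such as IPTG) inactivates it. IPTG is not hydrolyzed by
    E. coli, so once added it is effectively impossible to remove,
    and expression of the plasmid-borne gene stays on.
    """

    def __init__(self) -> None:
        self.iptg_added = False

    def add_iptg(self) -> None:
        # Induction step: IPTG binds and inactivates the lac repressor.
        self.iptg_added = True

    def expressing(self, lactose_present: bool = False) -> bool:
        # The promoter is de-repressed when either inducer is present;
        # IPTG persists, so induction is practically irreversible.
        return self.iptg_added or lactose_present
```

This captures only the qualitative switching behavior; real induction is graded and somewhat leaky.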
<br />
<br />
<br />
Two early examples of synthetic biological circuits were published in [[Nature (journal)|Nature]] in 2000. One, by Tim Gardner, Charles Cantor, and [[James Collins (bioengineer)|Jim Collins]] working at [[Boston University]], demonstrated a "bistable" switch in ''E. coli''. The switch is turned on by heating the culture of bacteria and turned off by addition of IPTG. They used GFP as a reporter for their system.<ref name="Gardner">Gardner, T.s., Cantor, C.R., Collins, J. Construction of a genetic toggle switch in Escherichia coli. ''Nature'' 403, 339-342 (20 January 2000).</ref> The second, by [[Michael Elowitz]] and [[Stanislas Leibler]], showed that three repressor genes could be connected to form a negative feedback loop termed the [[Repressilator]] that produces self-sustaining oscillations of protein levels in ''E. coli.''<ref>{{Cite journal|last=Stanislas Leibler|last2=Elowitz|first2=Michael B.|date=January 2000|title=A synthetic oscillatory network of transcriptional regulators|journal=Nature|volume=403|issue=6767|pages=335–338|doi=10.1038/35002125|pmid=10659856|issn=1476-4687}}</ref><br />
<br />
2000年，《自然》杂志发表了两个合成生物电路的早期例子。其一出自波士顿大学的蒂姆·加德纳（Tim Gardner）、查尔斯·康托（Charles Cantor）和吉姆·柯林斯（Jim Collins），他们在大肠杆菌中展示了一种“双稳态”开关：加热细菌培养物可打开开关，加入 IPTG 则将其关闭。他们使用 GFP 作为系统的报告基因。第二项研究由迈克尔·埃洛维茨（Michael Elowitz）和斯坦尼斯拉斯·莱布勒（Stanislas Leibler）完成，他们证明三个阻遏基因可以相互连接形成一个负反馈环路，称为抑制震荡子（Repressilator），它能在大肠杆菌中产生蛋白质水平的自维持振荡。<br />
<br />
<br />
<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in [[systems biology]] with more publications detailing synthetic biological circuits published every year.<ref>{{cite journal | last1 = Purnick | first1 = Priscilla E. M. | last2 = Weis | first2 = Ron | year = 2009 | title = The second wave of synthetic biology: from modules to systems | url = | journal = Nature Reviews Molecular Cell Biology | volume = 10 | issue = 6| pages = 410–422 | doi = 10.1038/nrm2698 | pmid=19461664}}</ref> There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition<ref>International Genetically Engineered Machines (iGem) http://igem.org/Main_Page</ref> manages the creation and standardization of [[BioBrick]] parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
目前，合成电路是系统生物学研究的一个新兴领域，每年都有更多的出版物详细介绍合成生物电路。外界对鼓励教育和推广也有很大的兴趣：国际基因工程机器竞赛（iGEM）管理着 BioBrick（生物积木）部件的创造和标准化，这使得本科生和高中生能够设计自己的合成生物电路。<br />
<br />
<br />
== Interest and goals 研究方向和目标==<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for [[metabolic engineering]], and [[synthetic biology]]. Those demonstrated successfully include pharmaceutical production,<ref>{{cite journal | last1 = Ro | first1 = D.-K. | last2 = Paradise | first2 = E.M. | last3 = Ouellet | first3 = M. | last4 = Fisher | first4 = K.J. | last5 = Newman | first5 = K.L. | last6 = Ndungu | first6 = J.M. | last7 = Ho | first7 = K.A. | last8 = Eachus | first8 = R.A. | last9 = Ham | first9 = T.S. | last10 = Kirby | first10 = J. | last11 = Chang | first11 = M.C.Y. | last12 = Withers | first12 = S.T. | last13 = Shiba | first13 = Y. | last14 = Sarpong | first14 = R. | last15 = Keasling | first15 = J.D. | year = 2006 | title = Production of the antimalarial drug precursor artemisinic acid in engineered yeast | url = | journal = Nature | volume = 440 | issue = 7086| pages = 940–943 | doi=10.1038/nature04640 | pmid=16612385}}</ref> and fuel production.<ref>{{cite journal | last1 = Fortman | first1 = J.L. | last2 = Chhabra | first2 = S. | last3 = Mukhopadhyay | first3 = A. | last4 = Chou | first4 = H. | last5 = Lee | first5 = T.S. | last6 = Steen | first6 = E. | last7 = Keasling | first7 = J.D. | year = 2008 | title = Biofuel alternatives to ethanol: pumping the microbial well | url = https://digital.library.unt.edu/ark:/67531/metadc1013351/| journal = Trends Biotechnol | volume = 26 | issue = 7| pages = 375–381 | doi=10.1016/j.tibtech.2008.03.008| pmid = 18471913 }}</ref> However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple [[repressor]] or [[promoter (genetics)|promoter]] is introduced to facilitate creation of the product, or inhibition of a competing pathway. 
However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
<br />
合成生物电路既有近期应用也有长期应用，包括代谢工程和合成生物学中的多种应用。已成功展示的案例包括药物生产和燃料生产。然而，如果不运用合成细胞电路的基本原理，直接导入基因的方法本身并不有效。例如，上述每一个成功的系统都采用了某种方法来引入“全有或全无”式的诱导或表达：这是一种通过引入简单的阻遏物或启动子来促进产物生成、或抑制竞争途径的生物电路。然而，由于人们对细胞网络和天然电路的了解有限，实现控制和反馈更精确、更稳健的方案仍受到阻碍。这正是当前人们对合成细胞电路兴趣的直接来源。<br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin.<ref>{{cite journal | last1 = Keasling | first1 = J.D. | year = 2008 | title = Synthetic biology for synthetic chemistry. | url = | journal = ACS Chem Biol | volume = 3 | issue = 1| pages = 64–76 | doi=10.1021/cb7002434| pmid = 18205292 | title-link = synthetic chemistry }}</ref> To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
对细胞电路理解的深入可以带来令人兴奋的新改造，例如能对环境刺激作出反应的细胞。比如，可以开发出能够感知有毒环境、并通过激活降解所感知毒素的途径来作出反应的细胞。要开发这样的细胞，就必须创建一个能对给定刺激作出适当响应的复杂合成细胞电路。<br />
<br />
<br />
Given synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play"<ref name="Kobayashi" /> cells with well defined genetic circuitry can be engineered. It is widely believed that if a proper toolbox of parts is generated,<ref>{{cite journal | last1 = Lucks | first1 = Julius B | last2 = Qi | first2 = Lei | last3 = Whitaker | first3 = Weston R | last4 = Arkin | first4 = Adam P | year = 2008 | title = Toward scalable parts families for predictable design of biological circuits | url = | journal = Current Opinion in Microbiology | volume = 11 | issue = 6| pages = 567–573 | doi = 10.1016/j.mib.2008.10.002 | pmid = 18983935 }}</ref> synthetic cells can be developed implementing only the pathways necessary for cell survival reproduction. From this cell, to be thought of as a minimal [[genome]] cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
鉴于合成细胞电路代表了一种控制细胞活动的形式，可以推断，在完全理解细胞通路的情况下，就能改造出具有明确定义的遗传电路的“即插即用”细胞。人们普遍认为，如果建立一个合适的部件工具箱，就可以开发出只实现细胞生存繁殖所必需通路的合成细胞。以这个可视为最小基因组细胞的细胞为起点，我们可以从工具箱中添加部件，创建带有合适合成电路的明确定义的通路，从而形成有效的反馈系统。由于这种自底向上的构建方法，加上拟议中的电路部件图谱数据库，那些用于对计算机或电子电路建模的技术可以被用来重新设计细胞并对细胞建模，以便于故障排除、预测行为和产量。<br />
<br />
<br />
== Example circuits 电路示例 ==<br />
<br />
<br />
=== Oscillators 振荡器 ===<br />
<br />
# [[Repressilator]] 抑制震荡子<br />
<br />
# Mammalian tunable synthetic oscillator 哺乳动物可调谐合成振荡器<br />
<br />
<br />
# Bacterial tunable synthetic oscillator 细菌可调谐合成振荡器<br />
<br />
<br />
# Coupled bacterial oscillator 耦合细菌振荡器<br />
<br />
# Globally coupled bacterial oscillator 全局耦合细菌振荡器<br />
<br />
<br />
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression. 埃洛维茨（Elowitz）等人和冯（Fung）等人创造了振荡电路，利用多种自调节机制产生随时间振荡的基因产物表达。<ref>{{cite journal | last1 = Elowitz | first1 = M.B. | last2 = Leibler | first2 = S. | year = 2000 | title = A synthetic oscillatory network of transcriptional regulators | pmid = 10659856| journal = Nature | volume = 403 | issue = 6767| pages = 335–338 | doi=10.1038/35002125}}</ref><ref>{{cite journal | last1 = Fung | first1 = E. | last2 = Wong | first2 = W.W. | last3 = Suen | first3 = J.K. | last4 = Bulter | first4 = T. | last5 = Lee | first5 = S. | last6 = Liao | first6 = J.C. | year = 2005 | title = A synthetic gene–metabolic oscillator | url = | journal = Nature | volume = 435 | issue = 7038| pages = 118–122 | doi=10.1038/nature03508| pmid = 15875027 }}</ref> <br />
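A minimal numerical sketch of such an oscillator, in the spirit of the Repressilator's three-gene ring (a symmetric, protein-only ODE caricature with illustrative, unfitted parameters, not the published model):

```python
def simulate_repressilator(steps=40000, dt=0.01, alpha=20.0, n=4.0):
    """Euler-integrate dp_i/dt = alpha / (1 + p_{i-1}**n) - p_i for a
    ring of three mutually repressing proteins. With sufficiently
    strong and cooperative repression the symmetric fixed point is
    unstable, and protein levels oscillate indefinitely."""
    p = [1.0, 2.0, 4.0]  # asymmetric start breaks the symmetry
    trace = []           # record protein 0 over time
    for _ in range(steps):
        # p[i - 1] wraps around (Python negative indexing), closing the ring
        dp = [alpha / (1.0 + p[i - 1] ** n) - p[i] for i in range(3)]
        p = [p[i] + dt * dp[i] for i in range(3)]
        trace.append(p[0])
    return trace
```

The late part of the returned trace keeps oscillating rather than settling, which is the self-sustaining behavior the text describes.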
<br />
<br />
<br />
=== Bistable switches 双稳态开关 ===<br />
<br />
<br />
# Toggle-switch 拨动开关<br />
<br />
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli resulting in persistent responses.<ref name="Gardner" /> 加德纳等人利用两个控制单元之间的相互阻遏，实现了一种能以双稳态方式控制细胞的拨动开关：瞬时刺激产生持久的响应。<br />
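The bistability can be sketched with the standard two-repressor ODE model of a genetic toggle (illustrative parameters; a caricature of the published switch, not its actual parameter set):

```python
def settle(u, v, alpha=10.0, beta=2.0, dt=0.01, steps=10000):
    """Relax the mutual-repression system
         du/dt = alpha/(1 + v**beta) - u
         dv/dt = alpha/(1 + u**beta) - v
    to (near) steady state from the given initial condition,
    using simple Euler integration."""
    for _ in range(steps):
        du = alpha / (1.0 + v ** beta) - u
        dv = alpha / (1.0 + u ** beta) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# Two different transient starting conditions persist as two distinct
# stable states: the hallmark of a bistable toggle.
state_a = settle(5.0, 0.1)   # ends with u high, v low
state_b = settle(0.1, 5.0)   # ends with u low, v high
```

A transient stimulus that pushes the state across the separatrix (in the real circuit, heat or IPTG destabilizing one repressor) flips the system into the other basin, where it then stays.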
<br />
<br />
<br />
<br />
=== Logical operators 逻辑运算 ===<br />
<br />
[[File:SynBioCirc-AndLogicGate.jpg|frame|center|The logical [[AND gate]].逻辑与门<ref name="rocha">{{cite journal | last1 = Silva-Rocha | first1 = R. | last2 = de Lorenzo | first2 = V. | year = 2008 | title = Mining logic gates in prokaryotic transcriptional regulation networks | url = | journal = FEBS Letters | volume = 582 | issue = 8| pages = 1237–1244 | doi=10.1016/j.febslet.2008.01.060 | pmid=18275855}}</ref><ref name="buchler">{{cite journal | last1 = Buchler | first1 = N.E. | last2 = Gerland | first2 = U. | last3 = Hwa | first3 = T. | year = 2003 | title = On schemes of combinatorial transcription logic | journal = PNAS | volume = 100 | issue = 9| pages = 5136–5141 | doi=10.1073/pnas.0930314100 | pmid=12702751 | pmc=404558}}</ref> If Signal A '''AND''' Signal B are present, then the desired gene product will result. All promoters shown are inducible, activated by the displayed gene product. Each signal activates expression of a separate gene (shown in light blue). The expressed proteins then can either form a complete complex in [[cytosol]], that is capable of activating expression of the output (shown), or can act separately to induce expression, such as separately removing an inhibiting protein and inducing activation of the uninhibited promoter. 如果信号 A 与信号 B 同时存在，就会产生期望的基因产物。图中所有启动子都是诱导型的，由所示的基因产物激活。每个信号各自激活一个单独基因的表达（浅蓝色所示）。随后，表达出的蛋白质既可以在胞质溶胶中形成一个能够激活输出基因（图示）表达的完整复合体，也可以分别发挥作用来诱导表达，例如分别去除一个抑制蛋白、并诱导激活不再受抑制的启动子。]]<br />
<br />
<br />
<br />
Computational design and evaluation of DNA circuits to achieve optimal performance<br />
<br />
实现最佳性能的 DNA 电路的计算设计和评估<br />
<br />
[[File:SynBioCirc-OrLogicGate.jpg|frame|center|The logical [[OR gate]].逻辑或门<ref name="rocha" /><ref name="buchler" /> If Signal A '''OR''' Signal B are present, then the desired gene product will result. All promoters shown are inducible. Either signal is capable of activating the expression of the output gene product, and only the action of a single promoter is required for gene expression. Post-transcriptional regulation mechanisms can prevent the presence of both inputs producing a compounded high output, such as implementing a low binding affinity [[ribosome binding site]]. 如果信号 A 或信号 B 存在，就会产生期望的基因产物。图中所有启动子都是诱导型的。任一信号都能激活输出基因产物的表达，且基因表达只需单个启动子起作用。转录后调控机制（例如使用低结合亲和力的核糖体结合位点）可以防止两个输入同时存在时产生叠加的过高输出。]]<br />
<br />
<br />
<br />
Recent developments in artificial gene synthesis and the corresponding increase in competition within the industry have led to a significant drop in price and wait time of gene synthesis and helped improve methods used in circuit design. At the moment, circuit design is improving at a slow pace because of insufficient organization of known multiple gene interactions and mathematical models. This issue is being addressed by applying computer-aided design (CAD) software to provide multimedia representations of circuits through images, text and programming language applied to biological circuits. Some of the more well known CAD programs include GenoCAD, Clotho framework and j5. GenoCAD uses grammars, which are either opensource or user generated "rules" which include the available genes and known gene interactions for cloning organisms. Clotho framework uses the Biobrick standard rules.<br />
<br />
最近人工基因合成领域的发展以及行业内竞争的相应加剧，使基因合成的价格和等待时间显著下降，并帮助改进了电路设计所用的方法。目前，由于对已知多基因相互作用的整理不足、数学模型欠缺，电路设计的进展仍较为缓慢。这一问题正通过应用计算机辅助设计（CAD）软件来解决，即利用图像、文本和应用于生物电路的编程语言来提供电路的多媒体表示。较为著名的 CAD 程序包括 GenoCAD、Clotho 框架和 j5。GenoCAD 使用“语法”，即开源的或用户生成的“规则”，其中包括克隆生物的可用基因和已知的基因相互作用。Clotho 框架使用 BioBrick（生物积木）标准规则。<br />
<br />
[[File:SynBioCirc-NandLogicGate.jpg|frame|center|The logical [[Negated AND gate]].逻辑与非门<ref name="rocha" /><ref name="buchler" /> If Signal A '''AND''' Signal B are present, then the desired gene product will '''NOT''' result. All promoters shown are inducible. The activating promoter for the output gene is constitutive, and thus not shown. The constitutive promoter for the output gene keeps it "on" and is only deactivated when (similar to the AND gate) a complex as a result of two input signal gene products blocks the expression of the output gene. 如果信号 A 与信号 B 同时存在，则不会产生期望的基因产物。图中所示的启动子都是诱导型的。输出基因的激活启动子是组成型的，因此未在图中显示。该组成型启动子使输出基因保持“开启”状态；只有当（与“与门”类似）两个输入信号的基因产物形成的复合体阻断输出基因的表达时，它才失活。]]<br />
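The three gate architectures in the figures above reduce to familiar truth tables; a pure boolean abstraction of the described promoter logic (a sketch for illustration, not a model of any specific published circuit):

```python
def and_gate(signal_a: bool, signal_b: bool) -> bool:
    # Both signal-induced proteins are needed to assemble the
    # activating complex that drives the output gene.
    return signal_a and signal_b

def or_gate(signal_a: bool, signal_b: bool) -> bool:
    # Either inducible promoter alone suffices to drive the output.
    return signal_a or signal_b

def nand_gate(signal_a: bool, signal_b: bool) -> bool:
    # The output promoter is constitutive ("on" by default) and is
    # blocked only when the two-input complex forms.
    return not (signal_a and signal_b)
```

As in electronics, these compose: for example `nand_gate(a, a)` behaves as a NOT gate, so in principle NAND alone is sufficient to build any logic function.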
<br />
<br />
<br />
=== Analog tuners 模拟调谐器 ===<br />
<br />
Using negative feedback and identical promoters, linearizer gene circuits can impose uniform gene expression that depends linearly on extracellular chemical inducer concentration.<br />
<br />
线性化基因电路利用负反馈和相同的启动子，可以使基因表达均匀一致，并与细胞外化学诱导物的浓度成线性关系。<ref name="pmid19279212">{{cite journal | vauthors = Nevozhay D, Adams RM, Murphy KF, Josic K, Balázsi G| title = Negative autoregulation linearizes the dose-response and suppresses the heterogeneity of gene expression | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 106 | issue = 13 | pages = 5123-8 | date = March 31, 2009 | pmid = 19279212 | pmc = 2654390 | doi = 10.1073/pnas.0809901106 }}</ref><br />
<br />
<br />
<br />
=== Controllers of gene expression heterogeneity 基因表达异质性的控制===<br />
<br />
Synthetic gene circuits can control gene expression heterogeneity, and this heterogeneity can be tuned independently of the gene expression mean.<br />
<br />
合成基因电路可以控制基因表达的异质性,并且可以独立于基因表达均值来控制。<ref name="pmid17189188">{{cite journal | vauthors = Blake WJ, Balázsi G, Kohanski MA, Isaacs FJ, Murphy KF, Kuang Y, Cantor CR, Walt DR, Collins JJ| title = Phenotypic Consequences of Promoter-Mediated Transcriptional Noise | journal = Molec. Cell | volume = 24 | issue = 6 | pages = 853-65 | date = December 28, 2006 | pmid = 17189188 | doi = 10.1016/j.molcel.2006.11.003 }}</ref><br />
<br />
<br />
<br />
=== Other engineered systems 其他工程系统===<br />
<br />
<!--- Categories ---><br />
<br />
<br />
Engineered systems are the result of implementation of combinations of different control mechanisms. A limited counting mechanism was implemented by a pulse-controlled gene cascade<ref>{{cite journal | last1 = Friedland | first1 = A.E. | last2 = Lu | first2 = T.K | last3 = Wang | first3 = X. | last4 = Shi | first4 = D. | last5 = Church | first5 = G. | last6 = Collins | first6 = J.J. | year = 2009 | title = Synthetic Gene Networks That Count | url = | journal = Science | volume = 324 | issue = 5931| pages = 1199–1202 | doi=10.1126/science.1172005 | pmid=19478183 | pmc=2690711}}</ref> and application of logic elements enables genetic "programming" of cells as in the research of Tabor et al., which synthesized a photosensitive bacterial edge detection program.<ref>{{cite journal | last1 = Tabor | first1 = J.J. | last2 = Salis | first2 = H.M. | last3 = Simpson | first3 = Z.B. | last4 = Chevalier | first4 = A.A. | last5 = Levskaya | first5 = A. | last6 = Marcotte | first6 = E.M. | last7 = Voigt | first7 = C.A. | last8 = Ellington | first8 = A.D. | year = 2009 | title = A Synthetic Edge Detection Program | url = | journal = Cell | volume = 137 | issue = 7| pages = 1272–1281 | doi=10.1016/j.cell.2009.04.048| pmid = 19563759 | pmc = 2775486 }}</ref><br />
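The limited counting behavior described above can be abstracted as a small state machine: each input pulse advances a cascade one stage, and the reporter turns on only after a fixed number of pulses. This is a toy abstraction of the counting logic only, not Friedland et al.'s actual genetic construct; the function and parameter names are invented.

```python
# Toy state-machine abstraction of a pulse-counting gene cascade: each
# pulse drives expression of the next stage, and the reporter is expressed
# only after `limit` pulses have arrived.

def count_pulses(pulses, limit=3):
    stage = 0
    for pulse in pulses:
        if pulse and stage < limit:
            stage += 1        # one pulse advances the cascade one stage
    return stage >= limit     # reporter on once the cascade completes
```

With the default limit of three, two pulses leave the reporter off, while a third pulse completes the cascade and switches it on, mimicking the "count then respond" behavior of the engineered system.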
<br />
Category:Synthetic biology<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biological circuit]]. Its edit history can be viewed at [[合成生物电路/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E7%94%B5%E8%B7%AF&diff=20729合成生物电路2020-12-31T07:11:00Z<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共956,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
--[[用户:小趣木木|小趣木木]]([[用户讨论:小趣木木|讨论]])文本缺失 需要补充<br />
{{Synthetic biology}}<br />
<br />
[[File:Lac Operon.svg|thumb|275px|Lac Operon|The ''lac'' operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. 乳糖操纵子是一种天然生物电路,许多合成电路都以它为基础。上图:抑制状态;下图:活跃状态。<br /><br />
'''''1'': RNA polymerase, ''2'': Repressor, ''3'': Promoter, ''4'': Operator, ''5'': Lactose, ''6'': ''lacZ'', ''7'': ''lacY'', ''8'': ''lacA''. 1: RNA 聚合酶,2: 阻遏物,3: 启动子,4: 操纵基因,5: 乳糖,6: lacZ,7: lacY,8: lacA。]]<br />
<br />
<br />
<br />
'''Synthetic biological circuits''' are an application of [[synthetic biology]] where biological parts inside a [[Cell (biology)|cell]] are designed to perform logical functions mimicking those observed in [[electronic circuit]]s. The applications range from simply inducing production to adding a measurable element, like [[Green Fluorescent Protein|GFP]], to an existing [[Gene regulatory network|natural biological circuit]], to implementing completely new systems of many parts.<ref name="Kobayashi">{{cite journal | last1 = Kobayashi | first1 = H. | last2 = Kærn | first2 = M. | last3 = Araki | first3 = M. | last4 = Chung | first4 = K. | last5 = Gardner | first5 = T. S. | last6 = Cantor | first6 = C. R. | last7 = Collins | first7 = J. J. | year = 2004 | title = Programmable cells: Interfacing natural and engineered gene networks | journal = PNAS | volume = 101 | issue = 22| pages = 8414–8419 | doi=10.1073/pnas.0402940101 | pmid=15159530 | pmc=420408}}</ref><br />
<br />
<br />
合成生物电路是合成生物学的一种应用:设计细胞内的生物部件,使其执行模仿电子电路的逻辑功能。其应用范围十分广泛,从简单地诱导表达,到在现有的天然生物电路中添加可测量的元件(如绿色荧光蛋白 GFP),再到搭建由许多部件组成的全新系统。<br />
<br />
[[Image:Protein translation.gif|thumb|300px| A [[ribosome]] is a [[biological machine]].核糖体是一个生物机器。]]<br />
<br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented.<ref name="SynBioFaq">{{cite web|title=Synthetic Biology: FAQ|url=http://syntheticbiology.org/FAQ.html|work=SyntheticBiology.org|accessdate=21 December 2011|url-status=dead|archiveurl=https://web.archive.org/web/20021212065409/http://syntheticbiology.org/faq.html|archivedate=12 December 2002}}</ref> These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "[[biological machine]]s" to perform a vast range of useful functions.<ref name="Kobayashi"/><br />
<br />
<br />
合成生物学旨在产生一系列可调谐、特征明确的部件或模块,利用它们可以方便地设计并实现任何想要的合成生物电路。这些电路可以用来修改细胞功能,建立细胞对环境条件的响应,或影响细胞的发育。通过在细胞系统中实现合理、可控的逻辑元件,研究人员可以把活体系统当作工程化的"生物机器"来执行大量实用功能。迈克尔·埃洛维茨(Michael Elowitz)和斯坦尼斯拉斯·雷布勒(Stanislas Leibler)的第二项研究表明,三个阻遏基因可以相互联结,形成一个负反馈环路,称为'''<font color="#ff8000">抑制震荡子(Repressilator)</font>''',它可以在大肠杆菌中产生蛋白质水平的自维持振荡。<br />
<br />
== History 发展历程 ==<br />
<br />
<br />
目前,合成生物电路是系统生物学研究中的一个新兴领域,每年都有更多详细介绍合成生物电路的论文发表。在教育和推广方面外界也有着浓厚的兴趣:国际基因工程机器竞赛通过管理"生物积木(BioBrick)"部件的创建和标准化,让本科生和高中生能够设计自己的合成生物电路。<br />
<br />
The first natural gene circuit studied in detail was the [[lac operon]]. In studies of [[diauxie|diauxic growth]] of ''[[E. coli]]'' on two-sugar media, [[Jacques Monod]] and [[Francois Jacob]] discovered that ''E. coli'' preferentially consumes the more easily processed [[glucose]] before switching to [[lactose]] metabolism. They discovered that the mechanism controlling the metabolic "switching" function was a two-part control mechanism on the lac operon. When lactose is present in the cell, the [[enzyme]] [[β-galactosidase]] is produced to convert lactose into [[glucose]] or [[galactose]]. When lactose is absent, the lac repressor inhibits production of β-galactosidase to prevent inefficient processes within the cell.<br />
<br />
<br />
<br />
The lac operon is used in the [[biotechnology]] industry for production of [[recombinant DNA|recombinant]] [[proteins]] for therapeutic use. The gene or genes for producing an [[exogenous]] protein are placed on a [[plasmid]] under the control of the lac promoter. Initially the cells are grown in a medium that does not contain lactose or other sugars, so the new genes are not expressed. Once the cells reach a certain point in their growth, [[IPTG|Isopropyl β-D-1-thiogalactopyranoside (IPTG)]] is added. IPTG, a lactose analogue with a sulfur bond that ''E. coli'' cannot hydrolyze and therefore cannot digest, is used to activate or "[[Regulation of gene expression#Inducible vs. repressible systems|induce]]" the production of the new protein. Once the cells are induced, it is difficult to remove IPTG from the cells and therefore it is difficult to stop expression.<br />
<br />
<br />
合成生物电路既有近期应用也有长期应用,涉及代谢工程和合成生物学等多个领域。已成功展示的应用包括药物生产和燃料生产。然而,如果不运用合成细胞电路的基本原理,直接基因导入的方法本身并不有效。例如,上述每个成功的系统都采用了引入"全或无"诱导或表达的方法,即通过引入简单的阻遏物或启动子来促进产物生成或抑制竞争途径的生物电路。然而,由于对细胞网络和天然电路的了解有限,实现具有更精确控制和反馈、更具鲁棒性的方案受到阻碍。这正是当前合成细胞电路研究的直接意义所在。<br />
<br />
<br />
Two early examples of synthetic biological circuits were published in [[Nature (journal)|Nature]] in 2000. One, by Tim Gardner, Charles Cantor, and [[James Collins (bioengineer)|Jim Collins]] working at [[Boston University]], demonstrated a "bistable" switch in ''E. coli''. The switch is turned on by heating the culture of bacteria and turned off by addition of IPTG. They used GFP as a reporter for their system.<ref name="Gardner">Gardner, T.s., Cantor, C.R., Collins, J. Construction of a genetic toggle switch in Escherichia coli. ''Nature'' 403, 339-342 (20 January 2000).</ref> The second, by [[Michael Elowitz]] and [[Stanislas Leibler]], showed that three repressor genes could be connected to form a negative feedback loop termed the [[Repressilator]] that produces self-sustaining oscillations of protein levels in ''E. coli.''<ref>{{Cite journal|last=Stanislas Leibler|last2=Elowitz|first2=Michael B.|date=January 2000|title=A synthetic oscillatory network of transcriptional regulators|journal=Nature|volume=403|issue=6767|pages=335–338|doi=10.1038/35002125|pmid=10659856|issn=1476-4687}}</ref><br />
<br />
<br />
对细胞电路理解的深入可以带来令人兴奋的新改造,例如能对环境刺激作出反应的细胞。比如,可以开发出能够感知有毒环境、并通过激活降解相应毒素的途径来作出反应的细胞。要开发这样的细胞,就必须创建一个能对给定刺激作出适当响应的复杂合成细胞电路。<br />
<br />
<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in [[systems biology]] with more publications detailing synthetic biological circuits published every year.<ref>{{cite journal | last1 = Purnick | first1 = Priscilla E. M. | last2 = Weis | first2 = Ron | year = 2009 | title = The second wave of synthetic biology: from modules to systems | url = | journal = Nature Reviews Molecular Cell Biology | volume = 10 | issue = 6| pages = 410–422 | doi = 10.1038/nrm2698 | pmid=19461664}}</ref> There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition<ref>International Genetically Engineered Machines (iGem) http://igem.org/Main_Page</ref> manages the creation and standardization of [[BioBrick]] parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
<br />
鉴于合成细胞电路代表了一种控制细胞活动的形式,可以推断,在完全了解细胞通路之后,就能开发出只实现细胞生存繁殖所必需通路的"即插即用"合成细胞。从这个可视为最小基因组细胞的细胞出发,可以添加工具箱中的部件,为有效的反馈系统构建带有适当合成电路的明确通路。由于采用自底向上的基本构建方法,并有提议中的已测绘电路部件数据库,可以借鉴计算机或电子电路建模的技术来重新设计细胞并对细胞建模,以便于排除故障、预测行为和产量。<br />
<br />
== Interest and goals 研究方向和目标==<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for [[metabolic engineering]], and [[synthetic biology]]. Those demonstrated successfully include pharmaceutical production,<ref>{{cite journal | last1 = Ro | first1 = D.-K. | last2 = Paradise | first2 = E.M. | last3 = Ouellet | first3 = M. | last4 = Fisher | first4 = K.J. | last5 = Newman | first5 = K.L. | last6 = Ndungu | first6 = J.M. | last7 = Ho | first7 = K.A. | last8 = Eachus | first8 = R.A. | last9 = Ham | first9 = T.S. | last10 = Kirby | first10 = J. | last11 = Chang | first11 = M.C.Y. | last12 = Withers | first12 = S.T. | last13 = Shiba | first13 = Y. | last14 = Sarpong | first14 = R. | last15 = Keasling | first15 = J.D. | year = 2006 | title = Production of the antimalarial drug precursor artemisinic acid in engineered yeast | url = | journal = Nature | volume = 440 | issue = 7086| pages = 940–943 | doi=10.1038/nature04640 | pmid=16612385}}</ref> and fuel production.<ref>{{cite journal | last1 = Fortman | first1 = J.L. | last2 = Chhabra | first2 = S. | last3 = Mukhopadhyay | first3 = A. | last4 = Chou | first4 = H. | last5 = Lee | first5 = T.S. | last6 = Steen | first6 = E. | last7 = Keasling | first7 = J.D. | year = 2008 | title = Biofuel alternatives to ethanol: pumping the microbial well | url = https://digital.library.unt.edu/ark:/67531/metadc1013351/| journal = Trends Biotechnol | volume = 26 | issue = 7| pages = 375–381 | doi=10.1016/j.tibtech.2008.03.008| pmid = 18471913 }}</ref> However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple [[repressor]] or [[promoter (genetics)|promoter]] is introduced to facilitate creation of the product, or inhibition of a competing pathway. 
However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
<br />
<br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin.<ref>{{cite journal | last1 = Keasling | first1 = J.D. | year = 2008 | title = Synthetic biology for synthetic chemistry. | url = | journal = ACS Chem Biol | volume = 3 | issue = 1| pages = 64–76 | doi=10.1021/cb7002434| pmid = 18205292 | title-link = synthetic chemistry }}</ref> To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
<br />
Given synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play"<ref name="Kobayashi" /> cells with well-defined genetic circuitry can be engineered. It is widely believed that if a proper toolbox of parts is generated,<ref>{{cite journal | last1 = Lucks | first1 = Julius B | last2 = Qi | first2 = Lei | last3 = Whitaker | first3 = Weston R | last4 = Arkin | first4 = Adam P | year = 2008 | title = Toward scalable parts families for predictable design of biological circuits | url = | journal = Current Opinion in Microbiology | volume = 11 | issue = 6| pages = 567–573 | doi = 10.1016/j.mib.2008.10.002 | pmid = 18983935 }}</ref> synthetic cells can be developed implementing only the pathways necessary for cell survival and reproduction. From this cell, to be thought of as a minimal [[genome]] cell, one can add pieces from the toolbox to create a well-defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of this basic ground-up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
<br />
== Example circuits 电路示例 ==<br />
<br />
<br />
<br />
<br />
<br />
=== Oscillators 振荡器 ===<br />
<br />
# [[Repressilator]] 抑制震荡子<br />
# Mammalian tunable synthetic oscillator 哺乳动物可调谐合成振荡器<br />
# Bacterial tunable synthetic oscillator 细菌可调谐合成振荡器<br />
# Coupled bacterial oscillator 耦合细菌振荡器<br />
# Globally coupled bacterial oscillator 全局耦合细菌振荡器<br />
<br />
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression.埃洛维茨等人和冯等人创造了一种振荡电路,它使用多个自调节机制来形成基因表达的依赖于时间的振荡器。<ref>{{cite journal | last1 = Elowitz | first1 = M.B. | last2 = Leibler | first2 = S. | year = 2000 | title = A synthetic oscillatory network of transcriptional regulators | pmid = 10659856| journal = Nature | volume = 403 | issue = 6767| pages = 335–338 | doi=10.1038/35002125}}</ref><ref>{{cite journal | last1 = Fung | first1 = E. | last2 = Wong | first2 = W.W. | last3 = Suen | first3 = J.K. | last4 = Bulter | first4 = T. | last5 = Lee | first5 = S. | last6 = Liao | first6 = J.C. | year = 2005 | title = A synthetic gene–metabolic oscillator | url = | journal = Nature | volume = 435 | issue = 7038| pages = 118–122 | doi=10.1038/nature03508| pmid = 15875027 }}</ref> <br />
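A repressilator-style ring of three mutual repressors can be sketched numerically. The model below is the standard dimensionless mRNA/protein form integrated with forward Euler; all parameter values and the initial conditions are illustrative choices, not values fitted to the published constructs.

```python
# Minimal forward-Euler sketch of a three-gene repression ring: gene i is
# repressed by the protein of gene i-1 (indices wrap around, closing the
# ring). Parameters are illustrative only.

def repressilator(alpha=216.0, alpha0=0.2, beta=5.0, n=2, dt=0.005, t_end=60.0):
    m = [1.0, 2.0, 3.0]   # mRNA levels; an asymmetric start breaks the symmetry
    p = [1.0, 2.0, 3.0]   # corresponding protein levels
    trace = []            # recorded trajectory of protein 0
    for _ in range(int(t_end / dt)):
        # p[i - 1] uses Python's negative indexing to wrap around the ring
        dm = [alpha / (1.0 + p[i - 1] ** n) + alpha0 - m[i] for i in range(3)]
        dp = [beta * (m[i] - p[i]) for i in range(3)]
        m = [m[i] + dt * dm[i] for i in range(3)]
        p = [p[i] + dt * dp[i] for i in range(3)]
        trace.append(p[0])
    return trace
```

For suitable parameter choices this negative-feedback ring can produce sustained, self-maintaining oscillations of protein levels; plotting `trace` against time shows the behavior of one of the three proteins.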
<br />
<br />
<br />
=== Bistable switches 双稳态开关===<br />
<br />
<br />
# Toggle-switch 拨动开关<br />
<br />
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli result in persistent responses.<ref name="Gardner" /> 加德纳等人利用两个控制单元之间的相互抑制,实现了一个能以双稳态方式控制细胞的拨动开关:瞬时刺激产生持久的响应。<br />
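The mutual-repression toggle can be sketched as two Hill-repression ODEs. The parameters below are illustrative, not the published construct's values; the point of the sketch is only that, depending on the initial bias, the system relaxes into one of two stable states, which is what lets a transient stimulus produce a persistent response.

```python
# Sketch of a Gardner-style genetic toggle switch: two repressors, u and v,
# each inhibiting the other's synthesis. Forward-Euler integration with
# illustrative parameters.

def toggle_final_state(u0, v0, alpha=10.0, n=2, dt=0.01, t_end=50.0):
    u, v = u0, v0
    for _ in range(int(t_end / dt)):
        du = alpha / (1.0 + v ** n) - u   # u is made unless v represses it
        dv = alpha / (1.0 + u ** n) - v   # v is made unless u represses it
        u, v = u + dt * du, v + dt * dv
    return u, v
```

Starting biased toward u (`toggle_final_state(5.0, 0.5)`) relaxes to a u-high/v-low state, while the mirrored start relaxes to the opposite state, illustrating bistability.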
<br />
<br />
<br />
<br />
工程系统是不同控制机制组合的结果。脉冲控制基因级联实现了有限计数机制,逻辑元件的应用实现了细胞的遗传“编程”,例如泰伯等人合成了一个光敏细菌边缘检测程序。<br />
<br />
=== Logical operators 逻辑运算===<br />
<br />
[[File:SynBioCirc-AndLogicGate.jpg|frame|center|The logical [[AND gate]].逻辑与门<ref name="rocha">{{cite journal | last1 = Silva-Rocha | first1 = R. | last2 = de Lorenzo | first2 = V. | year = 2008 | title = Mining logic gates in prokaryotic transcriptional regulation networks | url = | journal = FEBS Letters | volume = 582 | issue = 8| pages = 1237–1244 | doi=10.1016/j.febslet.2008.01.060 | pmid=18275855}}</ref><ref name="buchler">{{cite journal | last1 = Buchler | first1 = N.E. | last2 = Gerland | first2 = U. | last3 = Hwa | first3 = T. | year = 2003 | title = On schemes of combinatorial transcription logic | journal = PNAS | volume = 100 | issue = 9| pages = 5136–5141 | doi=10.1073/pnas.0930314100 | pmid=12702751 | pmc=404558}}</ref> If Signal A '''AND''' Signal B are present, then the desired gene product will result. All promoters shown are inducible, activated by the displayed gene product. Each signal activates expression of a separate gene (shown in light blue). The expressed proteins then can either form a complete complex in the [[cytosol]] that is capable of activating expression of the output (shown), or can act separately to induce expression, such as separately removing an inhibiting protein and inducing activation of the uninhibited promoter.如果信号 A 和信号 B 同时存在,那么将产生期望的基因产物。图中所示的启动子都是诱导型的,由所示的基因产物激活。每个信号激活一个单独基因的表达(如浅蓝色所示)。随后,表达的蛋白质既可以在细胞溶质中形成一个能够激活输出基因表达的完整复合体(如图所示),也可以分别发挥作用来诱导表达,例如分别去除抑制蛋白并诱导激活不受抑制的启动子。]]<br />
<br />
<br />
<br />
Computational design and evaluation of DNA circuits to achieve optimal performance<br />
<br />
实现最佳性能的 DNA 电路的计算设计和评估<br />
<br />
[[File:SynBioCirc-OrLogicGate.jpg|frame|center|The logical [[OR gate]].逻辑或门<ref name="rocha" /><ref name="buchler" /> If Signal A '''OR''' Signal B are present, then the desired gene product will result. All promoters shown are inducible. Either signal is capable of activating the expression of the output gene product, and only the action of a single promoter is required for gene expression. Post-transcriptional regulation mechanisms can prevent the presence of both inputs producing a compounded high output, such as implementing a low binding affinity [[ribosome binding site]].如果信号 A 或信号 B 存在,那么将产生期望的基因产物。图中所示的启动子都是诱导型的。任一信号都能激活输出基因产物的表达,而且基因表达只需要单个启动子发挥作用。转录后调控机制可以防止两个输入同时存在时产生叠加的高输出,例如采用低结合亲和力的核糖体结合位点。]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E5%AD%A6&diff=19789合成生物学2020-12-07T14:34:35Z<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共4491,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{redirect|Artificial life form|simulated life forms|Artificial life}}<br />
<br />
{{short description|Interdisciplinary branch of biology and engineering}}<br />
<br />
{{Synthetic biology}}<br />
<br />
[[File:Synthetic Biology Research at NASA Ames.jpg|thumb|Synthetic Biology Research at [[Ames Research Center|NASA Ames Research Center]]. NASA埃姆斯研究中心的合成生物学研究。]]<br />
<br />
<br />
<br />
<br />
'''Synthetic biology''' ('''SynBio''') is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
<br />
合成生物学(SynBio)是一个多学科的研究领域,旨在创造新的生物部件、设备和系统,或重新设计已经在自然界中发现的系统。<br />
<br />
<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as [[biotechnology]], [[genetic engineering]], [[molecular biology]], [[molecular engineering]], [[systems biology]], [[Model lipid bilayer|membrane science]], [[biophysics]], [[Biological engineering|chemical and biological engineering]], [[Electrical engineering|electrical and computer engineering]], [[control engineering]] and [[evolutionary biology]].<br />
<br />
<br />
它是科学的一个分支,涵盖了来自不同学科的广泛方法,例如生物技术、基因工程、分子生物学、分子工程、系统生物学、膜科学、生物物理学、化学与生物工程、电子与计算机工程、控制工程和进化生物学。<br />
<br />
<br />
<br />
Due to more powerful [[genetic engineering]] capabilities and decreased DNA synthesis and [[DNA sequencing|sequencing costs]], the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<ref>{{cite journal | last1 = Bueso | first1 = F. Y. | last2 = Tangney | first2 = M. | year = 2017 | title = Synthetic Biology in the Driving Seat of the Bioeconomy | url = | journal = Trends in Biotechnology | volume = 35 | issue = 5| pages = 373–378 | doi = 10.1016/j.tibtech.2017.02.002 | pmid = 28249675 }}</ref><br />
<br />
Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<br />
<br />
由于更强大的基因工程能力和降低的 DNA 合成及测序成本,合成生物学领域正在迅速发展。2016年,来自40个国家的350多家公司积极参与合成生物学应用; 所有这些公司在全球市场的净值估计为39亿美元。<br />
<br />
<br />
<br />
== Definition 定义 ==<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
合成生物学目前还没有公认的定义。以下是一些定义的示例:<br />
<br />
<br />
<br />
* "the use of a mixture of physical engineering and genetic engineering to create new (and, therefore, synthetic) life forms混合使用物理工程和基因工程来创建新的(因而也即合成的)生命形式。"<ref>{{cite journal | last1 = Hunter | first1 = D | year = 2013 | title = How to object to radically new technologies on the basis of justice: the case of synthetic biology | url = | journal = Bioethics | volume = 27 | issue = 8| pages = 426–434 | doi = 10.1111/bioe.12049 | pmid = 24010854 }}</ref><br />
<br />
<br />
* "an emerging field of research that aims to combine the knowledge and methods of biology, engineering and related disciplines in the design of chemically synthesized DNA to create organisms with novel or enhanced characteristics and traits一个新兴的研究领域,旨在将生物学,工程学和相关学科领域的知识和方法结合到化学合成DNA 的设计中,从而创造出具有新颖或增强特性和特征的有机体。<br />
"<ref>{{cite journal | last1 = Gutmann | first1 = A | year = 2011 | title = The ethics of synthetic biology: guiding principles for emerging technologies | url = | journal = Hastings Center Report | volume = 41 | issue = 4| pages = 17–22 | doi = 10.1002/j.1552-146x.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
* "designing and constructing [[BioBrick|biological modules]], [[biological systems]], and [[biological machine]]s or, re-design of existing biological systems for useful purposes设计并构建生物积木、生物系统以及生物机器,或为有用的目的重新设计现有的生物系统。"<ref name="NakanoEckford2013">{{cite book|url={{google books |plainurl=y |id=uVhsAAAAQBAJ}}|title=Molecular Communication|last1=Nakano|first1=Tadashi|last2=Eckford|first2=Andrew W.|last3=Haraguchi|first3=Tokuko|date=12 September 2013|publisher=Cambridge University Press|isbn=978-1-107-02308-6|name-list-style=vanc}}</ref><br />
<br />
<br />
* “applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature” (The European Commission, 2005). This can include the possibility of a [[molecular assembler]], based upon biomolecular systems such as the [[ribosome]].<ref name="RoadMap">{{Cite web|url=http://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf|title=Productive Nanosystems: A Technology Roadmap|website=Foresight Institute}}</ref><br />
将系统设计的工程范式应用到生物系统中,以产生具有自然界中不存在的新功能的可预测且健全的系统”(欧洲委员会,2005年),这可能包括基于生物分子系统——例如核糖体——的分子组合器的可能性。<br />
<br />
<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
值得注意的是,合成生物学在传统上被分为两种不同的方法: 自上而下和自下而上。<br />
<br />
<br />
<br />
# The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
自上而下的方法包括利用代谢和基因工程技术赋予活细胞以新的功能。<br />
<br />
# The <u>bottom up</u> approach involves creating new biological systems ''in vitro'' by bringing together 'non-living' biomolecular components,<ref>{{cite journal | vauthors = Schwille P | title = Bottom-up synthetic biology: engineering in a tinkerer's world | journal = Science | volume = 333 | issue = 6047 | pages = 1252–4 | date = September 2011 | pmid = 21885774 | doi = 10.1126/science.1211701 | bibcode = 2011Sci...333.1252S | s2cid = 43354332 }}</ref> often with the aim of constructing an [[artificial cell]].<br />
<br />
The <u>bottom up</u> approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.<br />
<br />
自下而上的方法包括在体外创建新的生物系统,将“非活性”的生物分子组件聚集在一起,其目的通常是构建一个人工细胞。<br />
<br />
<br />
<br />
Biological systems are thus assembled module-by-module. [[Cell-free protein synthesis|Cell-free protein expression systems]] are often employed,<ref>{{cite journal | vauthors = Noireaux V, Libchaber A | title = A vesicle bioreactor as a step toward an artificial cell assembly | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 101 | issue = 51 | pages = 17669–74 | date = December 2004 | pmid = 15591347 | pmc = 539773 | doi = 10.1073/pnas.0408236101 | bibcode = 2004PNAS..10117669N }}</ref><ref>{{cite journal | vauthors = Hodgman CE, Jewett MC | title = Cell-free synthetic biology: thinking outside the cell | journal = Metabolic Engineering | volume = 14 | issue = 3 | pages = 261–9 | date = May 2012 | pmid = 21946161 | pmc = 3322310 | doi = 10.1016/j.ymben.2011.09.002 }}</ref><ref>{{cite journal | vauthors = Elani Y, Law RV, Ces O | title = Protein synthesis in artificial cells: using compartmentalisation for spatial organisation in vesicle bioreactors | journal = Physical Chemistry Chemical Physics | volume = 17 | issue = 24 | pages = 15534–7 | date = June 2015 | pmid = 25932977 | doi = 10.1039/C4CP05933F | bibcode = 2015PCCP...1715534E | doi-access = free }}</ref> as are membrane-based molecular machinery. 
There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells,<ref>{{cite journal | vauthors = Elani Y, Trantidou T, Wylie D, Dekker L, Polizzi K, Law RV, Ces O | title = Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules | journal = Scientific Reports | volume = 8 | issue = 1 | pages = 4564 | date = March 2018 | pmid = 29540757 | pmc = 5852042 | doi = 10.1038/s41598-018-22263-3 | bibcode = 2018NatSR...8.4564E }}</ref> and engineering communication between living and synthetic cell populations.<ref>{{cite journal | vauthors = Lentini R, Martín NY, Forlin M, Belmonte L, Fontana J, Cornella M, Martini L, Tamburini S, Bentley WE, Jousson O, Mansy SS | title = Two-Way Chemical Communication between Artificial and Natural Cells | journal = ACS Central Science | volume = 3 | issue = 2 | pages = 117–123 | date = February 2017 | pmid = 28280778 | pmc = 5324081 | doi = 10.1021/acscentsci.6b00330 }}</ref><br />
<br />
Biological systems are thus assembled module-by-module. Cell-free protein expression systems are often employed, as are membrane-based molecular machinery. There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells, and engineering communication between living and synthetic cell populations.<br />
<br />
生物系统就是这样一个模块一个模块地组装起来的。无细胞蛋白表达系统和基于膜的分子机器经常被采用。人们正日益努力通过构建活细胞/合成细胞的杂合体,以及在活细胞与合成细胞群体之间建立工程化的通讯,来弥合这两类方法之间的鸿沟。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
'''1910:''' First identifiable use of the term "synthetic biology" in [[Stéphane Leduc]]'s publication ''Théorie physico-chimique de la vie et générations spontanées''.<ref>[https://openlibrary.org/books/OL23348076M/Théorie_physico-chimique_de_la_vie_et_générations_spontanées Théorie physico-chimique de la vie et générations spontanées, S. Leduc, 1910]</ref> He also noted this term in another publication, ''La Biologie Synthétique'' in 1912.<ref>{{cite book |url=http://www.peiresc.org/bstitre.htm |title=La biologie synthétique, étude de biophysique |last=Leduc |first=Stéphane |date=1912 | veditors = Poinat A }}</ref><br />
<br />
1910: First identifiable use of the term "synthetic biology" in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also noted this term in another publication, La Biologie Synthétique in 1912.<br />
<br />
1910年: “合成生物学”一词最早可确认的使用出现在斯特凡纳·勒杜克 (Stéphane Leduc) 的著作《Théorie physico-chimique de la vie et générations spontanées》中。他还在1912年的另一本著作《La Biologie Synthétique》中使用了这一术语。<br />
<br />
<br />
<br />
'''1961:''' Jacob and Monod postulate cellular regulation by molecular networks from their study of the ''lac'' operon in ''E. coli'' and envisioned the ability to assemble new systems from molecular components.<ref>Jacob, F. & Monod, J. On the regulation of gene activity. Cold Spring Harb. Symp. Quant. Biol. 26, 193–211 (1961).</ref><br />
<br />
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components.<br />
<br />
1961年: 雅各布 (Jacob) 和莫诺 (Monod) 基于对大肠杆菌乳糖操纵子的研究,提出了分子网络调控细胞的假说,并设想了由分子组件组装新系统的能力。<br />
<br />
<br />
<br />
'''1973:''' First molecular cloning and amplification of DNA in a plasmid is published in ''P.N.A.S.'' by Cohen, Boyer ''et al.'' constituting the dawn of synthetic biology.<ref>{{cite journal | vauthors = Cohen SN, Chang AC, Boyer HW, Helling RB | title = Construction of biologically functional bacterial plasmids in vitro | journal = Proc. Natl. Acad. Sci. USA | volume = 70 | issue = 11 | pages = 3240–3244 | date = 1973 | pmid = 4594039 | doi = 10.1073/pnas.70.11.3240 | bibcode = 1973PNAS...70.3240C | pmc = 427208 }}</ref><br />
<br />
1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al. constituting the dawn of synthetic biology.<br />
<br />
1973年: 科恩 (Cohen) 、博耶 (Boyer) 等人在《美国国家科学院院刊》(P.N.A.S.) 上发表了第一例在质粒中进行 DNA 分子克隆和扩增的工作,标志着合成生物学的开端。<br />
<br />
<br />
<br />
'''1978:''' [[Werner Arber|Arber]], [[Daniel Nathans|Nathans]] and [[Hamilton O. Smith|Smith]] win the [[Nobel Prize in Physiology or Medicine]] for the discovery of [[restriction enzyme]]s, leading Szybalski to offer an editorial comment in the journal ''[[Gene (journal)|Gene]]'':<br />
<br />
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene:<br />
<br />
1978年: 阿尔伯 (Arber) 、纳森斯 (Nathans) 和史密斯 (Smith) 因发现限制性内切酶而获得诺贝尔生理学或医学奖,这促使齐巴尔斯基 (Szybalski) 在《基因》(Gene) 杂志上发表了一篇社论评论:<br />
<br />
<br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.<ref>{{cite journal | vauthors = Szybalski W, Skalka A | title = Nobel prizes and restriction enzymes | journal = Gene | volume = 4 | issue = 3 | pages = 181–2 | date = November 1978 | pmid = 744485 | doi = 10.1016/0378-1119(78)90016-1 }}</ref></blockquote><br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.</blockquote><br />
<br />
限制性核酸酶的研究不仅使我们能够很容易地构建重组 DNA 分子和分析单个基因,而且把我们带入了合成生物学的新时代: 不仅可以描述和分析现有的基因,还可以构建和评估新的基因排列。<br />
<br />
<br />
<br />
'''1988:''' First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in ''Science'' by Mullis ''et al.''<ref>{{cite journal | vauthors = Saiki RK, Gelfand DH, Stoffel S, Scharf SJ, Higuchi R, Horn GT, Mullis KB, Erlich HA | title = Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase | journal = Science | volume = 239 | issue = 4839 | pages = 487–491 | date = 1988 | pmid = 2448875 | doi = 10.1126/science.239.4839.487 }}</ref> This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988年: 穆利斯 (Mullis) 等人在《科学》杂志上发表了首次使用热稳定 DNA 聚合酶进行聚合酶链式反应 (PCR) 扩增 DNA 的成果。这避免了在每个 PCR 循环后重新加入 DNA 聚合酶,从而大大简化了 DNA 诱变和组装。<br />
<br />
<br />
<br />
'''2000:''' Two papers in [[Nature (journal)|Nature]] report [[synthetic biological circuits]], a genetic toggle switch and a biological clock, by combining genes within [[Escherichia coli|''E. coli'']] cells.<ref name=":0">{{cite journal | vauthors = Elowitz MB, Leibler S | title = A synthetic oscillatory network of transcriptional regulators | journal = Nature | volume = 403 | issue = 6767 | pages = 335–8 | date = January 2000 | pmid = 10659856 | doi = 10.1038/35002125 | bibcode = 2000Natur.403..335E | s2cid = 41632754 }}</ref><ref name=":1">{{cite journal | vauthors = Gardner TS, Cantor CR, Collins JJ | title = Construction of a genetic toggle switch in Escherichia coli | journal = Nature | volume = 403 | issue = 6767 | pages = 339–42 | date = January 2000 | pmid = 10659857 | doi = 10.1038/35002131 | bibcode = 2000Natur.403..339G | s2cid = 345059 }}</ref><br />
<br />
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.<br />
<br />
2000年: 《自然》杂志的两篇论文报道了通过在大肠杆菌细胞内组合基因而构建的合成生物电路: 一个基因拨动开关和一个生物钟。<br />
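上文提到的“生物钟”即 Elowitz 与 Leibler (2000) 的压缩振荡子 (repressilator): 三个基因构成环状抑制网络,使蛋白浓度产生持续振荡。下面是一个极简的 Python 示意草图(并非原论文代码;所用无量纲模型为该论文的常见教学形式,参数取值 alpha=216、beta=5 等为常用的演示性假设):

```python
# 压缩振荡子 (repressilator) 的无量纲 ODE 模型, 用欧拉法数值积分。
# 基因 i 的蛋白抑制下一个基因的转录, 三者首尾相连成环。

def repressilator(alpha=216.0, alpha0=0.216, beta=5.0, n=2.0,
                  dt=0.01, steps=50000):
    """返回蛋白 p[0] 浓度随时间的轨迹 (演示用参数)。"""
    m = [1.0, 1.2, 1.5]   # mRNA 初值 (任意取值, 打破对称性)
    p = [1.0, 1.0, 1.0]   # 蛋白初值
    history = []
    for _ in range(steps):
        # dm_i/dt = -m_i + alpha/(1 + p_{i-1}^n) + alpha0
        dm = [-m[i] + alpha / (1.0 + p[(i - 1) % 3] ** n) + alpha0
              for i in range(3)]
        # dp_i/dt = -beta * (p_i - m_i)
        dp = [-beta * (p[i] - m[i]) for i in range(3)]
        m = [m[i] + dt * dm[i] for i in range(3)]
        p = [p[i] + dt * dp[i] for i in range(3)]
        history.append(p[0])
    return history

trace = repressilator()
late = trace[len(trace) // 2:]   # 丢弃初始瞬态
print(max(late) - min(late))     # 振幅明显大于零, 说明持续振荡
```

在该参数区间内,蛋白浓度不会收敛到固定点,而是持续振荡;减小 alpha 或合作系数 n,系统便会趋于稳定不动点。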
<br />
<br />
<br />
'''2003:''' The most widely used standardized DNA parts, [[BioBrick]] plasmids, are invented by [[Tom Knight (scientist)|Tom Knight]].<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003年: 使用最广泛的标准化 DNA 部件,即生物积木 (BioBrick) 质粒,由汤姆·奈特 (Tom Knight) 发明。这些部件将成为次年 (2004年) 在麻省理工学院创办的国际基因工程机器大赛 (iGEM) 的核心。<br />
<br />
<br />
<br />
[[File:Synthetic Biology Open Language (SBOL) standard visual symbols.png|thumb|upright=1.25| [[Synthetic Biology Open Language]] (SBOL) standard visual symbols for use with [[BioBrick|BioBricks Standard]]]]<br />
<br />
Synthetic Biology Open Language (SBOL) standard visual symbols for use with BioBricks Standard<br />
<br />
与生物积木标准 (BioBricks Standard) 一起使用的合成生物学开放语言 (SBOL) 标准视觉符号<br />
<br />
<br />
<br />
'''2003:''' Researchers engineer an artemisinin precursor pathway in ''E. coli''.<ref>Martin, V. J., Pitera, D. J., Withers, S. T., Newman, J. D. & Keasling, J. D. Engineering a mevalonate pathway in Escherichia coli for production of terpenoids. Nature Biotech. 21, 796–802 (2003).</ref><br />
<br />
2003: Researchers engineer an artemisinin precursor pathway in E. coli.<br />
<br />
2003年: 研究人员在大肠杆菌中设计出青蒿素前体途径。<br />
<br />
<br />
<br />
'''2004:''' First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004年: 第一届合成生物学国际会议,合成生物学1.0(SB1.0)在美国麻省理工学院举行。<br />
<br />
<br />
<br />
'''2005:''' Researchers develop a light-sensing circuit in ''E. coli''.<ref>{{cite journal | last1 = Levskaya | first1 = A. | display-authors = etal | year = 2005 | title = Synthetic biology: engineering Escherichia coli to see light | url = | journal = Nature | volume = 438 | issue = 7067| pages = 441–442 | doi = 10.1038/nature04405 | pmid = 16306980 | s2cid = 4428475 }}</ref> Another group designs circuits capable of multicellular pattern formation.<ref>Basu, S., Gerchman, Y., Collins, C. H., Arnold, F. H. & Weiss, R. A synthetic multicellular system for programmed pattern formation. ''Nature'' 434,</ref><br />
<br />
2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation.<br />
<br />
2005年: 研究人员在大肠杆菌中开发出一种感光电路。另一个研究小组设计出了能够形成多细胞模式的电路。<br />
<br />
<br />
<br />
'''2006:''' Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<ref>{{cite journal | last1 = Anderson | first1 = J. C. | last2 = Clarke | first2 = E. J. | last3 = Arkin | first3 = A. P. | last4 = Voigt | first4 = C. A. | year = 2006 | title = Environmentally controlled invasion of cancer cells by engineered bacteria | url = | journal = J. Mol. Biol. | volume = 355 | issue = 4| pages = 619–627 | doi = 10.1016/j.jmb.2005.10.076 | pmid = 16330045 }}</ref><br />
<br />
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<br />
<br />
2006年: 研究人员设计了一种能促进细菌侵入肿瘤细胞的合成电路。<br />
<br />
<br />
<br />
'''2010:''' Researchers publish in ''Science'' the first synthetic bacterial genome, called ''M. mycoides'' JCVI-syn1.0.<ref name="gibson52" /><ref>{{Cite news|url=https://www.telegraph.co.uk/news/science/science-news/7747779/American-scientist-who-created-artificial-life-denies-playing-God.html|title=American scientist who created artificial life denies 'playing God'|last=|first=|date=May 2010|website=The Telegraph|url-status=live|archive-url=|archive-date=|access-date=}}</ref> The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010年: 研究人员在《科学》杂志上发表了第一个人工合成的细菌基因组,名为丝状支原体 JCVI-syn1.0。该基因组由化学合成的 DNA 经酵母重组组装而成。<br />
<br />
<br />
<br />
'''2011:''' Functional synthetic chromosome arms are engineered in yeast.<ref>{{cite journal | last1 = Dymond | first1 = J. S. | display-authors = etal | year = 2011 | title = Synthetic chromosome arms function in yeast and generate phenotypic diversity by design | url = | journal = Nature | volume = 477 | issue = 7365 | pages = 816–821 | doi = 10.1038/nature10403 | pmid = 21918511 | pmc = 3774833 }}</ref><br />
<br />
2011: Functional synthetic chromosome arms are engineered in yeast.<br />
<br />
2011年: 成功在酵母中设计出功能性合成染色体臂。<br />
<br />
<br />
<br />
'''2012:''' Charpentier and Doudna labs publish in ''Science'' the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.<ref>{{cite journal | vauthors = Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E | title = A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity | journal = Science | volume = 337 | issue = 6096 | pages = 816–821 | date = 2012 | pmid = 22745249 | doi = 10.1126/science.1225829 | pmc = 6286148 }}</ref> This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012年: Charpentier 和 Doudna 的实验室在《科学》杂志上发表了将 CRISPR-Cas9 细菌免疫系统编程用于靶向 DNA 切割的成果。这项技术极大地简化和扩展了真核生物的基因编辑。<br />
<br />
<br />
<br />
'''2019:''' Scientists at [[ETH Zurich]] report the creation of the first [[bacterial genome]], named ''[[Caulobacter crescentus|Caulobacter ethensis-2.0]]'', made entirely by a computer, although a related [[wikt:viability|viable form]] of ''C. ethensis-2.0'' does not yet exist.<ref name="EA-20190401">{{cite news |author=ETH Zurich |title=First bacterial genome created entirely with a computer |url=https://www.eurekalert.org/pub_releases/2019-04/ez-fbg032819.php |date=1 April 2019 |work=[[EurekAlert!]] |accessdate=2 April 2019 |author-link=ETH Zurich }}</ref><ref name="PNAS20190401">{{cite journal |author=Venetz, Jonathan E. |display-authors=et al. |title=Chemical synthesis rewriting of a bacterial genome to achieve design flexibility and biological functionality |date=1 April 2019 |journal=[[Proceedings of the National Academy of Sciences of the United States of America]] |volume=116 |issue=16 |pages=8070–8079 |doi=10.1073/pnas.1818259116 |pmid=30936302 |pmc=6475421 }}</ref><br />
<br />
2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.<br />
<br />
2019年: 苏黎世联邦理工学院 (ETH Zurich) 的科学家报告说,他们已经创造出了第一个细菌基因组,并将其命名为 Caulobacter ethensis-2.0 ,这个基因组完全是由计算机制造的,尽管与之相关的可存活的Caulobacter ethensis-2.0还不存在。<br />
<br />
<br />
<br />
'''2019:''' Researchers report the production of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515">{{cite news |last=Zimmer |first=Carl |authorlink=Carl Zimmer |title=Scientists Created Bacteria With a Synthetic Genome. Is This Artificial Life? - In a milestone for synthetic biology, colonies of E. coli thrive with DNA constructed from scratch by humans, not nature. |url=https://www.nytimes.com/2019/05/15/science/synthetic-genome-bacteria.html |date=15 May 2019 |work=[[The New York Times]] |accessdate=16 May 2019 }}</ref><ref name="NAT-20190515">{{cite journal |author=Fredens, Julius |display-authors=et al. |title=Total synthesis of Escherichia coli with a recoded genome |date=15 May 2019 |journal=[[Nature (journal)|Nature]] |volume=569 |issue=7757 |pages=514–518 |doi=10.1038/s41586-019-1192-5 |pmid=31092918 |pmc=7039709 |bibcode=2019Natur.569..514F }}</ref><br />
<br />
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacteria Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons instead, in order to encode 20 amino acids.<br />
<br />
2019年: 研究人员报告制造出一种新的合成(可能是人工的)可存活生命形式,即大肠杆菌的一个变体: 他们将细菌基因组中天然的64种密码子减少为59种,用以编码全部20种氨基酸。<br />
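上述密码子压缩利用了遗传密码的简并性: 多个同义密码子编码同一种氨基酸,因此可以系统地替换掉选定的密码子而不改变蛋白序列。下面的 Python 草图示意这一思路(替换方案 TCG→AGC、TCA→AGT、TAG→TAA 参照 Fredens 等人 2019 年论文的重编码选择;代码本身仅为演示,并非原研究流程):

```python
# 示意性草图: 逐密码子扫描编码序列, 将目标密码子替换为同义密码子,
# 从而减少基因组使用的密码子种类, 而氨基酸序列保持不变。

RECODE = {"TCG": "AGC",  # 丝氨酸 (Ser) → 同义 Ser 密码子
          "TCA": "AGT",  # 丝氨酸 (Ser) → 同义 Ser 密码子
          "TAG": "TAA"}  # 琥珀型终止密码子 → 赭石型终止密码子

def recode(cds: str) -> str:
    """按三联体切分编码序列并替换目标密码子。"""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(RECODE.get(c, c) for c in codons)

print(recode("ATGTCATCGTAG"))  # → "ATGAGTAGCTAA"
```

在真实的基因组重编码中,每一处替换还须在全基因组尺度上验证对调控元件、重叠基因等的影响,这里的草图省略了这些步骤。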
<br />
<br />
<br />
== Perspectives 各方观点 ==<br />
<br />
Engineers view biology as a ''technology'' (in other words, a given system's ''[[biotechnology]]'' or its ''[[biological engineering]]'').<ref>{{cite journal | volume = 6 | last = Zeng | first = Jie (Bangzhe) | title = On the concept of systems bio-engineering | journal = Communication on Transgenic Animals, June 1994, CAS, PRC }}</ref> Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see [[Biomedical Engineering]]) and our environment.<ref>{{cite journal | volume = 6 | last = Chopra | first = Paras | author2 = Akhil Kamma | title = Engineering life through Synthetic Biology | journal = In Silico Biology }}</ref><br />
<br />
Engineers view biology as a technology (in other words, a given system's biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see Biomedical Engineering) and our environment.<br />
<br />
工程师将生物学视为一种技术(换言之,一个给定系统的生物技术或其生物工程)。合成生物学包括对生物技术的广泛重新定义和扩展,其最终目标是能够设计并构建这样的工程化生物系统: 它们可以处理信息、操纵化学物质、制造材料和结构、产生能源、提供食物,并维护和增强人类健康(见生物医学工程)与我们的环境。<br />
<br />
<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. {{citation needed|date=May 2020}}<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. <br />
<br />
合成生物学的研究可以根据处理问题的方法大致分为以下几类: 生物部件的标准化、生物分子工程、基因组工程。<br />
<br />
<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. [[Genetic engineering]] includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
生物分子工程包括旨在创建一个功能单元工具包的方法,这些功能单元可以用来展示活细胞中的新技术性功能。基因工程包括为整个或最小的有机体构建合成染色体的方法。<br />
<br />
<br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<ref>{{cite journal | vauthors = Channon K, Bromley EH, Woolfson DN | title = Synthetic biology through biomolecular design and engineering | journal = Current Opinion in Structural Biology | volume = 18 | issue = 4 | pages = 491–8 | date = August 2008 | pmid = 18644449 | doi = 10.1016/j.sbi.2008.06.006 }}</ref><br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<br />
<br />
生物分子设计指的是对生物分子组件进行从头设计 (de novo design) 与加和式组合的总体思想。这些方法有一个相似的任务: 通过创造性地操纵上一层次中较简单的部件,在更高的复杂度层次上开发出更具合成性的实体。<br />
<br />
<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate.<ref>{{cite journal | first = M | last = Stone | title = Life Redesigned to Suit the Engineering Crowd | journal = Microbe | volume = 1 | issue = 12 | pages = 566–570 | date = 2006 | s2cid = 7171812 | url = https://pdfs.semanticscholar.org/8d45/e0f37a0fb6c1a3c659c71ee9c52619b18364.pdf }}</ref> Re-writers draw inspiration from [[refactoring]], a process sometimes used to improve computer software.<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.<br />
<br />
另一方面,“重写者”是有意检验生物系统不可约性的合成生物学家。鉴于天然生物系统的复杂性,从头重建感兴趣的自然系统反而更为简单,从而可以提供更易于理解、控制和操纵的工程化替代品。重写者从重构 (refactoring) 中获得灵感,这是一种有时用于改进计算机软件的过程。<br />
<br />
<br />
<br />
== Enabling technologies 使能技术 ==<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include [[standardization]] of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.<ref>{{cite journal | vauthors = Baker D, Church G, Collins J, Endy D, Jacobson J, Keasling J, Modrich P, Smolke C, Weiss R | title = Engineering life: building a fab for biology | journal = Scientific American | volume = 294 | issue = 6 | pages = 44–51 | date = June 2006 | pmid = 16711359 | doi = 10.1038/scientificamerican0606-44 | bibcode = 2006SciAm.294f..44B }}</ref> Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and [[computer-aided design]] (CAD).<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).<br />
<br />
一些新的使能技术对合成生物学的成功至关重要。相关概念包括生物部件的标准化和层次化抽象,使这些部件得以在合成系统中使用。基础技术包括 DNA 的读取和写入(测序与合成)。精确的建模和计算机辅助设计 (CAD) 需要在多种条件下进行测量。<br />
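上述“标准化部件 + 层次化抽象”的思想可以用如下极简的 Python 草图示意(其中 Part 类型、角色名与占位序列均为假设的演示用例,并非任何真实标准件库或 BioBrick 登记库的 API):

```python
# 示意性草图: 用简单的数据结构表达标准化部件与层次化抽象。
# 底层是带有角色标注的标准化部件, 上一层把部件按顺序组装成"装置"。

from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    name: str        # 部件标识 (类似 BioBrick 编号, 此处为假设值)
    role: str        # promoter / rbs / cds / terminator
    sequence: str    # DNA 序列 (演示用占位序列)

def assemble(parts):
    """按顺序拼接标准化部件, 得到更高抽象层次的装置序列。"""
    return "".join(p.sequence for p in parts)

device = [
    Part("P1", "promoter",   "TTGACA"),
    Part("R1", "rbs",        "AGGAGG"),
    Part("C1", "cds",        "ATGAAATAA"),
    Part("T1", "terminator", "TTTTTT"),
]
print(assemble(device))  # → "TTGACAAGGAGGATGAAATAATTTTTT"
```

层次化抽象的意义在于: 设计者可以在“装置”层复用和重排部件,而不必每次都关心底层序列细节。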
<br />
<br />
<br />
=== DNA and gene synthesis DNA 和基因合成===<br />
<br />
{{Main|Artificial gene synthesis|Synthetic genomics}}Driven by dramatic decreases in costs of [[oligonucleotides|oligonucleotide]] ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level.<ref>{{cite journal | vauthors = Kosuri S, Church GM | title = Large-scale de novo DNA synthesis: technologies and applications | journal = Nature Methods | volume = 11 | issue = 5 | pages = 499–507 | date = May 2014 | pmid = 24781323 | doi = 10.1038/nmeth.2918 | pmc = 7098426 }}</ref> In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) [[Hepatitis C]] virus genome from chemically synthesized 60 to 80-mers.<ref>{{cite journal | vauthors = Blight KJ, Kolykhalov AA, Rice CM | title = Efficient initiation of HCV RNA replication in cell culture | journal = Science | volume = 290 | issue = 5498 | pages = 1972–4 | date = December 2000 | pmid = 11110665 | doi = 10.1126/science.290.5498.1972 | bibcode = 2000Sci...290.1972B }}</ref> In 2002 researchers at [[Stony Brook University]] succeeded in synthesizing the 7741 bp [[poliovirus]] genome from its published sequence, producing the second synthetic genome, spanning two years.<ref>{{cite journal | vauthors = Couzin J | title = Virology. 
Active poliovirus baked from scratch | journal = Science | volume = 297 | issue = 5579 | pages = 174–5 | date = July 2002 | pmid = 12114601 | doi = 10.1126/science.297.5579.174b | s2cid = 83531627 | url = https://semanticscholar.org/paper/248000e7bc654631ae217274a77253ceddf270a1 }}</ref> In 2003 the 5386 bp genome of the [[bacteriophage]] [[Phi X 174]] was assembled in about two weeks.<ref name="assembly2003">{{cite journal | vauthors = Smith HO, Hutchison CA, Pfannkoch C, Venter JC | title = Generating a synthetic genome by whole genome assembly: phiX174 bacteriophage from synthetic oligonucleotides | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 26 | pages = 15440–5 | date = December 2003 | pmid = 14657399 | pmc = 307586 | doi = 10.1073/pnas.2237126100 | bibcode = 2003PNAS..10015440S }}</ref> In 2006, the same team, at the [[J. Craig Venter Institute]], constructed and patented a [[Synthetic genomics|synthetic genome]] of a novel minimal bacterium, ''[[Mycoplasma laboratorium]]'' and were working on getting it functioning in a living cell.<ref>{{cite news|url=https://www.nytimes.com/2007/06/29/science/29cells.html|title=Scientists Transplant Genome of Bacteria|last=Wade|first=Nicholas|date=2007-06-29|work=The New York Times|access-date=2007-12-28|issn=0362-4331}}</ref><ref>{{cite journal | vauthors = Gibson DG, Benders GA, Andrews-Pfannkoch C, Denisova EA, Baden-Tillson H, Zaveri J, Stockwell TB, Brownley A, Thomas DW, Algire MA, Merryman C, Young L, Noskov VN, Glass JI, Venter JC, Hutchison CA, Smith HO | title = Complete chemical synthesis, assembly, and cloning of a Mycoplasma genitalium genome | journal = Science | volume = 319 | issue = 5867 | pages = 1215–20 | date = February 2008 | pmid = 18218864 | doi = 10.1126/science.1151721 | bibcode = 2008Sci...319.1215G | s2cid = 8190996 | url = https://semanticscholar.org/paper/8c662fd0e252c85d056aad7ff16009ebe1dd4cbc }}</ref><ref 
name="Ball">{{cite journal|last1=Ball|first1=Philip|date=2016|title=Man Made: A History of Synthetic Life|url=https://www.sciencehistory.org/distillations/magazine/man-made-a-history-of-synthetic-life|journal=Distillations|volume=2|issue=1|pages=15–23|access-date=22 March 2018}}</ref><br />
<br />
In 2007 it was reported that several companies were offering [[gene synthesis|synthesis of genetic sequences]] up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks.<ref>{{cite news| issn = 0362-4331| last = Pollack| first = Andrew| title = How Do You Like Your Genes? Biofabs Take Orders | work = The New York Times | access-date = 2007-12-28| date = 2007-09-12 | url = https://www.nytimes.com/2007/09/12/technology/techspecial/12gene.html?pagewanted=2&_r=1}}</ref> [[Oligonucleotide]]s harvested from a photolithographic- or inkjet-manufactured [[DNA chip]] combined with PCR and DNA mismatch error-correction allow inexpensive large-scale changes of [[codons]] in genetic systems to improve [[gene expression]] or incorporate novel amino-acids (see [[George M. Church]]'s and Anthony Forster's synthetic cell projects<ref>{{Cite web|url=http://arep.med.harvard.edu/SBP|title=Synthetic Biology Projects|website=arep.med.harvard.edu|access-date=2018-02-17}}</ref><ref>{{cite journal | vauthors = Forster AC, Church GM | title = Towards synthesis of a minimal cell | journal = Molecular Systems Biology | volume = 2 | issue = 1 | pages = 45 | date = 2006-08-22 | pmid = 16924266 | pmc = 1681520 | doi = 10.1038/msb4100090 }}</ref>). This favors a synthesis-from-scratch approach.<br />
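The stitching of a long construct from short overlapping oligos can be sketched computationally. The toy Python below (all sequences and the greedy string-overlap rule are illustrative only; real assembly relies on PCR extension and enzymatic mismatch correction, not string matching) joins each oligo to the growing sequence by their shared overlap:

```python
def assemble(oligos, min_overlap=10):
    """Greedily stitch oligos whose 3' end overlaps the next oligo's
    5' end. A toy model of overlap-based gene assembly."""
    seq = oligos[0]
    for oligo in oligos[1:]:
        # longest suffix of the growing sequence that is a prefix of `oligo`
        for k in range(min(len(seq), len(oligo)), min_overlap - 1, -1):
            if seq.endswith(oligo[:k]):
                seq += oligo[k:]
                break
        else:
            raise ValueError("no overlap of at least %d bp found" % min_overlap)
    return seq

# Two 32-mers sharing a 16 bp overlap (made-up sequences):
X, Y, Z = "ATGGCTAGCAAAGGAG", "AAGAACTTTTCACTGG", "AGTTGTCCCAATTCTT"
full = assemble([X + Y, Y + Z])
assert full == X + Y + Z  # 48 bp product
```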
<br />
Additionally, the [[CRISPR|CRISPR/Cas]] system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years".<ref name="washpost_crispr">{{cite news|last1=Basulto|first1=Dominic|title=Everything you need to know about why CRISPR is such a hot technology|url=https://www.washingtonpost.com/news/innovations/wp/2015/11/04/everything-you-need-to-know-about-why-crispr-is-such-a-hot-technology/|access-date=5 December 2015|work=Washington Post|date=November 4, 2015}}</ref> While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.<ref name="washpost_crispr" /> Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in [[Do-it-yourself biology|biohacking]].<ref>{{cite news|last1=Kahn|first1=Jennifer|title=The Crispr Quandary|url=https://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html?_r=0|access-date=5 December 2015|work=New York Times|date=November 9, 2015}}</ref><ref>{{cite journal|last1=Ledford|first1=Heidi|title=CRISPR, the disruptor|url=http://www.nature.com/news/crispr-the-disruptor-1.17673|access-date=5 December 2015|agency=Nature News|journal=Nature|date=June 3, 2015|pmid=26040877|doi=10.1038/522020a|volume=522|issue=7554|pages=20–4|bibcode=2015Natur.522...20L|doi-access=free}}</ref><ref>{{cite magazine|last1=Higginbotham|first1=Stacey|title=Top VC Says Gene Editing Is Riskier Than Artificial Intelligence|url=http://fortune.com/2015/12/04/khosla-crispr-ai/|access-date=5 December 2015|magazine=Fortune|date=4 December 2015}}</ref><br />
<br />
=== Sequencing 测序 ===<br />
<br />
[[DNA sequencing]] determines the order of [[nucleotide]] bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<ref>{{cite journal| author = Rollie| date = 2012 |title = Designing biological systems: Systems Engineering meets Synthetic Biology| journal = Chemical Engineering Science| volume = 69 | pages = 1–29| doi=10.1016/j.ces.2011.10.068| issue=1|display-authors=etal}}</ref><br />
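The second use, verification, amounts to comparing the sequencing read-out against the designed sequence. A minimal sketch (sequences are made up; it ignores the insertions, deletions and alignment that real verification pipelines must handle):

```python
def verify(designed, sequenced):
    """Return (position, designed_base, observed_base) for every
    mismatch. A toy stand-in for alignment-based verification."""
    if len(designed) != len(sequenced):
        raise ValueError("length mismatch")
    return [(i, d, s)
            for i, (d, s) in enumerate(zip(designed, sequenced))
            if d != s]

designed  = "ATGGCTAGCAAAGGAGAAGA"
sequenced = "ATGGCTAGCAATGGAGAAGA"  # one substitution
assert verify(designed, sequenced) == [(11, "A", "T")]
```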
<br />
=== Microfluidics ===<br />
<br />
[[Microfluidics]], in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them.<ref>{{cite journal | vauthors = Elani Y | title = Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology | journal = Biochemical Society Transactions | volume = 44 | issue = 3 | pages = 723–30 | date = June 2016 | pmid = 27284034 | pmc = 4900754 | doi = 10.1042/BST20160052 }}</ref><ref>{{cite journal | vauthors = Gach PC, Iwai K, Kim PW, Hillson NJ, Singh AK | title = Droplet microfluidics for synthetic biology | journal = Lab on a Chip | volume = 17 | issue = 20 | pages = 3388–3400 | date = October 2017 | pmid = 28820204 | doi = 10.1039/C7LC00576H | osti = 1421856 | url = http://www.escholarship.org/uc/item/6cr3k0v5 }}</ref> It is widely employed in screening assays.<ref>{{cite journal | vauthors = Vinuselvi P, Park S, Kim M, Park JM, Kim T, Lee SK | title = Microfluidic technologies for synthetic biology | journal = International Journal of Molecular Sciences | volume = 12 | issue = 6 | pages = 3576–93 | date = 2011-06-03 | pmid = 21747695 | pmc = 3131579 | doi = 10.3390/ijms12063576 }}</ref><br />
<br />
=== Modularity 模块化 ===<br />
<br />
The most used<ref name="primer">{{Cite book|title=Synthetic Biology – A Primer|last1=Freemont|first1=Paul S.|last2=Kitney|first2=Richard I.| name-list-style = vanc |date=2012|publisher=World Scientific|isbn=978-1-84816-863-3|doi=10.1142/p837}}</ref>{{rp|22–23}} standardized DNA parts are [[BioBrick]] plasmids, invented by [[Tom Knight (scientist)|Tom Knight]] in 2003.<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> Biobricks are stored at the [[Registry of Standard Biological Parts]] in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the [[international Genetically Engineered Machine]] (iGEM) competition.<ref name="primer" />{{rp|22–23}}<br />
<br />
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilient to harsh conditions). Interactions such as [[coiled coil]]s,<ref>{{cite journal | vauthors = Woolfson DN, Bartlett GJ, Bruning M, Thomson AR | title = New currency for old rope: from coiled-coil assemblies to α-helical barrels | journal = Current Opinion in Structural Biology | volume = 22 | issue = 4 | pages = 432–41 | date = August 2012 | pmid = 22445228 | doi = 10.1016/j.sbi.2012.03.002 }}</ref> [[SH3 domain]]-peptide binding<ref>{{cite journal | vauthors = Dueber JE, Wu GC, Malmirchegini GR, Moon TS, Petzold CJ, Ullal AV, Prather KL, Keasling JD | title = Synthetic protein scaffolds provide modular control over metabolic flux | journal = Nature Biotechnology | volume = 27 | issue = 8 | pages = 753–9 | date = August 2009 | pmid = 19648908 | doi = 10.1038/nbt.1557 | s2cid = 2756476 }}</ref> or [[SpyCatcher|SpyTag/SpyCatcher]]<ref>{{cite journal | vauthors = Reddington SC, Howarth M | title = Secrets of a covalent interaction for biomaterials and biotechnology: SpyTag and SpyCatcher | journal = Current Opinion in Chemical Biology | volume = 29 | pages = 94–9 | date = December 2015 | pmid = 26517567 | doi = 10.1016/j.cbpa.2015.10.002 | doi-access = free }}</ref> offer such control. 
In addition it is necessary to regulate protein-protein interactions in cells, such as with light (using [[light-oxygen-voltage-sensing domain]]s) or cell-permeable small molecules by [[chemically induced dimerization]].<ref>{{cite journal | vauthors = Bayle JH, Grimley JS, Stankunas K, Gestwicki JE, Wandless TJ, Crabtree GR | title = Rapamycin analogs with differential binding specificity permit orthogonal control of protein activity | journal = Chemistry & Biology | volume = 13 | issue = 1 | pages = 99–107 | date = January 2006 | pmid = 16426976 | doi = 10.1016/j.chembiol.2005.10.017 | doi-access = free }}</ref><br />
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the modeling module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<ref name="altszylerUltrasens2014">{{cite journal | vauthors = Altszyler E, Ventura A, Colman-Lerner A, Chernomoretz A | title = Impact of upstream and downstream constraints on a signaling module's ultrasensitivity | journal = Physical Biology | volume = 11 | issue = 6 | pages = 066003 | date = October 2014 | pmid = 25313165 | pmc = 4233326 | doi = 10.1088/1478-3975/11/6/066003 | bibcode = 2014PhBio..11f6003A }}</ref><ref name="altszylerUltrasens2017">{{cite journal | vauthors = Altszyler E, Ventura AC, Colman-Lerner A, Chernomoretz A | title = Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations | journal = PLOS ONE | volume = 12 | issue = 6 | pages = e0180083 | year = 2017 | pmid = 28662096 | pmc = 5491127 | doi = 10.1371/journal.pone.0180083 | bibcode = 2017PLoSO..1280083A | arxiv = 1608.08007 }}</ref><br />
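This point can be illustrated numerically. In the hypothetical sketch below, a Hill-type module has a local log-log gain of about 2 in isolation, but when fed by a saturating upstream stage the composite system's effective sensitivity at the same operating point drops below 1 (all parameter values are illustrative, not drawn from the cited studies):

```python
import math

def hill(x, K=1.0, n=4):
    """Ultrasensitive module: Hill input-output function with n > 1."""
    return x ** n / (K ** n + x ** n)

def loglog_gain(f, x, dx=1e-6):
    """Local sensitivity d(log f)/d(log x); values > 1 indicate
    ultrasensitivity at that operating point."""
    return (math.log(f(x + dx)) - math.log(f(x))) / (math.log(x + dx) - math.log(x))

# In isolation the module is ultrasensitive around x = K (gain ~ n/2 = 2).
gain_isolated = loglog_gain(hill, 1.0)

# Embedded downstream of a saturating (Michaelis-Menten-like) stage,
# the same module contributes much less sensitivity to the whole system.
upstream = lambda s: s / (0.2 + s)       # hypothetical upstream component
composite = lambda s: hill(upstream(s))
gain_embedded = loglog_gain(composite, 1.0)
```

Here `gain_isolated` is about 2 while `gain_embedded` falls below 1: the upstream saturation compresses the input range the Hill module actually sees.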
<br />
<br />
<br />
=== Modeling 建模 ===<br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in [[Transcription (biology)|transcription]], [[Translation (biology)|translation]], regulation and induction of gene regulatory networks.<ref>{{cite journal | vauthors = Carbonell-Ballestero M, Duran-Nebreda S, Montañez R, Solé R, Macía J, Rodríguez-Caso C | title = A bottom-up characterization of transfer functions for synthetic biology designs: lessons from enzymology | journal = Nucleic Acids Research | volume = 42 | issue = 22 | pages = 14060–14069 | date = December 2014 | pmid = 25404136 | pmc = 4267673 | doi = 10.1093/nar/gku964 }}</ref><ref>{{cite journal | vauthors = Kaznessis YN | title = Models for synthetic biology | journal = BMC Systems Biology | volume = 1 | issue = 1 | pages = 47 | date = November 2007 | pmid = 17986347 | pmc = 2194732 | doi = 10.1186/1752-0509-1-47 }}</ref><ref>{{cite conference |vauthors=Tuza ZA, Singhal V, Kim J, Murray RM | title = An in silico modeling toolbox for rapid prototyping of circuits in a biomolecular "breadboard" system. |book-title=52nd IEEE Conference on Decision and Control |date=December 2013 |doi=10.1109/CDC.2013.6760079}}</ref><br />
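As a minimal example of such a simulation, the sketch below integrates a one-gene transcription-translation model with forward Euler; all rate constants are illustrative values chosen for the sketch, not measurements:

```python
def simulate(k_tx=2.0, k_tl=5.0, d_m=0.2, d_p=0.05,
             induced=True, dt=0.01, t_end=200.0):
    """Forward-Euler integration of a minimal one-gene model:
         dm/dt = k_tx * u - d_m * m   (transcription under induction u)
         dp/dt = k_tl * m - d_p * p   (translation)
    Returns mRNA and protein levels at t_end."""
    u = 1.0 if induced else 0.0
    m = p = 0.0
    for _ in range(int(t_end / dt)):
        dm = k_tx * u - d_m * m
        dp = k_tl * m - d_p * p
        m += dm * dt
        p += dp * dt
    return m, p

m_ss, p_ss = simulate()
# analytic steady state: m* = k_tx/d_m = 10, p* = k_tl*m*/d_p = 1000
```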
<br />
<br />
<br />
<br />
=== Synthetic transcription factors 合成转录因子 ===<br />
<br />
Studies have considered the components of the [[Transcription (biology)|DNA transcription]] mechanism. One desire of scientists creating [[synthetic biological circuit]]s is to be able to control the transcription of synthetic DNA in unicellular organisms ([[prokaryote]]s) and in multicellular organisms ([[eukaryote]]s). One study tested the adjustability of synthetic [[transcription factor]]s (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes.<ref name="Khalil AS 2012">{{cite journal | vauthors = Khalil AS, Lu TK, Bashor CJ, Ramirez CL, Pyenson NC, Joung JK, Collins JJ | title = A synthetic biology framework for programming eukaryotic transcription functions | journal = Cell | volume = 150 | issue = 3 | pages = 647–58 | date = August 2012 | pmid = 22863014 | pmc = 3653585 | doi = 10.1016/j.cell.2012.05.045 }}</ref> Researchers were able to mutate functional regions called [[zinc finger]]s, the DNA specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the [[eukaryotic translation]] mechanisms.<ref name="Khalil AS 2012"/><br />
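The effect of weakening a zinc finger's affinity can be captured by a simple equilibrium-binding estimate: raising the dissociation constant lowers operator occupancy and hence the sTF's site-specific activity. A sketch under a one-site binding assumption, with arbitrary concentration units:

```python
def occupancy(tf, kd):
    """Equilibrium fraction of operator sites bound by a transcription
    factor at free concentration `tf` with dissociation constant `kd`
    (simple one-site binding; units are arbitrary)."""
    return tf / (kd + tf)

wild_type = occupancy(tf=10.0, kd=1.0)   # high-affinity zinc-finger array
mutant    = occupancy(tf=10.0, kd=50.0)  # mutated, weakened DNA binding
# The mutant occupies ~17% of sites versus ~91% for the wild type,
# so its transcriptional output at that operator drops accordingly.
```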
<br />
<br />
<br />
<br />
== Applications 应用 ==<br />
<br />
=== Biological computers 生物计算机 ===<br />
<br />
<br />
A [[biological computer]] refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of [[logic gate]]s in a number of organisms,<ref>{{cite journal | vauthors = Singh V | title = Recent advances and opportunities in synthetic logic gates engineering in living cells | journal = Systems and Synthetic Biology | volume = 8 | issue = 4 | pages = 271–82 | date = December 2014 | pmid = 26396651 | pmc = 4571725 | doi = 10.1007/s11693-014-9154-6 }}</ref> and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform both analog and/or digital computation.<ref>{{cite journal | vauthors = Purcell O, Lu TK | title = Synthetic analog and digital circuits for cellular computation and memory | journal = Current Opinion in Biotechnology | volume = 29 | pages = 146–55 | date = October 2014 | pmid = 24794536 | pmc = 4237220 | doi = 10.1016/j.copbio.2014.04.009 | series = Cell and Pathway Engineering }}</ref><ref>{{cite journal | vauthors = Daniel R, Rubens JR, Sarpeshkar R, Lu TK | title = Synthetic analog computation in living cells | journal = Nature | volume = 497 | issue = 7451 | pages = 619–23 | date = May 2013 | pmid = 23676681 | doi = 10.1038/nature12148 | bibcode = 2013Natur.497..619D | s2cid = 4358570 }}</ref> In human cells research demonstrated a universal logic evaluator that operates in mammalian cells in 2007.<ref>{{cite journal | vauthors = Rinaudo K, Bleris L, Maddamsetti R, Subramanian S, Weiss R, Benenson Y | title = A universal RNAi-based logic evaluator that operates in mammalian cells | journal = Nature Biotechnology | volume = 25 | issue = 7 | pages = 795–801 | date = July 2007 | pmid = 17515909 | doi = 10.1038/nbt1307 | s2cid = 280451 }}</ref> Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital 
computation to detect and kill human cancer cells in 2011.<ref>{{cite journal | vauthors = Xie Z, Wroblewska L, Prochazka L, Weiss R, Benenson Y | title = Multi-input RNAi-based logic circuit for identification of specific cancer cells | journal = Science | volume = 333 | issue = 6047 | pages = 1307–11 | date = September 2011 | pmid = 21885784 | doi = 10.1126/science.1205527 | bibcode = 2011Sci...333.1307X | s2cid = 13743291 | url = https://semanticscholar.org/paper/372e175668b5323d79950b58f12b36f6974a81ef }}</ref> Another group of researchers demonstrated in 2016 that principles of [[computer engineering]], can be used to automate digital circuit design in bacterial cells.<ref>{{cite journal | vauthors = Nielsen AA, Der BS, Shin J, Vaidyanathan P, Paralanov V, Strychalski EA, Ross D, Densmore D, Voigt CA | title = Genetic circuit design automation | journal = Science | volume = 352 | issue = 6281 | pages = aac7341 | date = April 2016 | pmid = 27034378 | doi = 10.1126/science.aac7341 | doi-access = free }}</ref> In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.<ref>{{cite journal | vauthors = Weinberg BH, Pham NT, Caraballo LD, Lozanoski T, Engel A, Bhatia S, Wong WW | title = Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells | journal = Nature Biotechnology | volume = 35 | issue = 5 | pages = 453–462 | date = May 2017 | pmid = 28346402 | pmc = 5423837 | doi = 10.1038/nbt.3805 }}</ref><br />
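Digital genetic circuits of this kind are commonly built from repressor-based NOR gates, the primitive used by genetic circuit design automation; since NOR is functionally complete, any Boolean function can be composed from it. A schematic model that abstracts each gate to its Boolean behavior (no biochemistry is simulated):

```python
def nor(a, b):
    """Repressor-based genetic NOR: the output promoter is active only
    when neither input repressor is expressed."""
    return int(not (a or b))

def and_gate(a, b):
    # AND composed from three NORs, mirroring how repressor outputs
    # would be wired into downstream promoters in a gene circuit
    return nor(nor(a, a), nor(b, b))

truth = [(a, b, and_gate(a, b)) for a in (0, 1) for b in (0, 1)]
```

This yields the expected AND truth table, with output 1 only for input (1, 1).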
<br />
<br />
<br />
=== Biosensors 生物传感器 ===<br />
<br />
<br />
A [[biosensor]] refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the [[Luciferase|Lux operon]] of ''[[Aliivibrio fischeri]],''<ref>{{cite journal | vauthors = de Almeida PE, van Rappard JR, Wu JC | title = In vivo bioluminescence for tracking cell fate and function | journal = American Journal of Physiology. Heart and Circulatory Physiology | volume = 301 | issue = 3 | pages = H663–71 | date = September 2011 | pmid = 21666118 | pmc = 3191083 | doi = 10.1152/ajpheart.00337.2011 }}</ref> which codes for the enzyme that is the source of bacterial [[bioluminescence]], and can be placed after a respondent [[Promoter (genetics)|promoter]] to express the luminescence genes in response to a specific environmental stimulus.<ref>{{cite journal | vauthors = Close DM, Xu T, Sayler GS, Ripp S | title = In vivo bioluminescent imaging (BLI): noninvasive visualization and interrogation of biological processes in living animals | journal = Sensors | volume = 11 | issue = 1 | pages = 180–206 | date = 2011 | pmid = 22346573 | pmc = 3274065 | doi = 10.3390/s110100180 }}</ref> One such sensor created, consisted of a [[bioluminescent bacteria]]l coating on a photosensitive [[computer chip]] to detect certain [[petroleum]] [[pollutant]]s. When the bacteria sense the pollutant, they luminesce.<ref>{{cite journal|last=Gibbs|first=W. 
Wayt| name-list-style = vanc |date=1997 |title=Critters on a Chip |url=http://www.sciam.com/article.cfm?id=critters-on-a-chip |journal=Scientific American|access-date=2 Mar 2009}}</ref> Another example of a similar mechanism is the detection of landmines by an engineered ''E.coli'' reporter strain capable of detecting [[TNT]] and its main degradation product [[2,4-Dinitrotoluene|DNT]], and consequently producing a green fluorescent protein ([[Green fluorescent protein|GFP]]).<ref>{{Cite journal|last1=Belkin|first1=Shimshon|last2=Yagur-Kroll|first2=Sharon|last3=Kabessa|first3=Yossef|last4=Korouma|first4=Victor|last5=Septon|first5=Tali|last6=Anati|first6=Yonatan|last7=Zohar-Perez|first7=Cheinat|last8=Rabinovitz|first8=Zahi|last9=Nussinovitch|first9=Amos|date=April 2017|title=Remote detection of buried landmines using a bacterial sensor|journal=Nature Biotechnology|volume=35|issue=4|pages=308–310|doi=10.1038/nbt.3791|pmid=28398330|s2cid=3645230|issn=1087-0156}}</ref><br />
<br />
<br />
<br />
<br />
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.<ref name="pmid26019220">{{cite journal | vauthors = Danino T, Prindle A, Kwong GA, Skalak M, Li H, Allen K, Hasty J, Bhatia SN | title = Programmable probiotics for detection of cancer in urine | journal = Science Translational Medicine | volume = 7 | issue = 289 | pages = 289ra84 | date = May 2015 | pmid = 26019220 | pmc = 4511399 | doi = 10.1126/scitranslmed.aaa3519 }}</ref><br />
<br />
<br />
<br />
<br />
=== Cell transformation 细胞转化 ===<br />
<br />
{{Main|Transformation (genetics)}}Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and proteins. Synthetic biologists have designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.<br />
<br />
<br />
<br />
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering ''E. coli'' and [[yeast]] for commercial production of a precursor of the [[Antimalarial medication|antimalarial drug]], [[Artemisinin]].<ref>{{cite journal | vauthors = Westfall PJ, Pitera DJ, Lenihan JR, Eng D, Woolard FX, Regentin R, Horning T, Tsuruta H, Melis DJ, Owens A, Fickes S, Diola D, Benjamin KR, Keasling JD, Leavell MD, McPhee DJ, Renninger NS, Newman JD, Paddon CJ | title = Production of amorphadiene in yeast, and its conversion to dihydroartemisinic acid, precursor to the antimalarial agent artemisinin | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 3 | pages = E111–8 | date = January 2012 | pmid = 22247290 | pmc = 3271868 | doi = 10.1073/pnas.1110740109 | bibcode = 2012PNAS..109E.111W }}</ref><br />
<br />
The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature.<br />
<br />
Top7蛋白是最早为自然界中从未出现过的折叠结构而设计的蛋白质之一。<br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be [[Transformation (genetics)|transformed]] with new DNA. Several ways allow constructing synthetic DNA components and even entire [[Artificial gene synthesis|synthetic genomes]], but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or [[phenotype]]s while growing and thriving.<ref>{{cite news|url=https://www.independent.co.uk/news/science/eureka-scientists-unveil-giant-leap-towards-synthetic-life-9219644.html|title=Eureka! Scientists unveil giant leap towards synthetic life|last=Connor|first=Steve|date=28 March 2014|work=The Independent|access-date=2015-08-06}}</ref> Cell transformation is used to create [[Synthetic biological circuit|biological circuits]], which can be manipulated to yield desired outputs.<ref name=":0" /><ref name=":1" /><br />
<br />
Natural proteins can be engineered; for example, by directed evolution, novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with properties similar to hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities, while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but were insensitive to the native ligand, acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study used two different computational methods: a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for production of longer-chain alcohols from sugar.<br />
<br />
天然蛋白质可以被改造,例如,通过定向进化可以产生与现有蛋白质功能相当或更优的新蛋白质结构。一个研究组制造了一种螺旋束,它能够以与血红蛋白相似的特性结合氧,但不结合一氧化碳。类似的蛋白质结构被构建出来以支持多种氧化还原酶活性,另一个研究组则构建了一种在结构和序列上全新的ATP酶。还有一个研究组产生了一类G蛋白偶联受体,这类受体可以被惰性小分子N-氧化氯氮平激活,但对天然配体乙酰胆碱不敏感;这些受体被称为DREADDs。新的功能或蛋白质特异性也可以利用计算方法进行设计。一项研究使用了两种不同的计算方法:用生物信息学和分子建模方法挖掘序列数据库,用计算酶设计方法重新编程酶的特异性。两种方法设计出的酶在用糖生产长链醇时都具有超过100倍的特异性。<br />
<br />
<br />
<br />
By integrating synthetic biology with [[materials science]], it would be possible to use cells as microscopic molecular foundries to produce materials whose properties are genetically encoded. Re-engineering has produced Curli fibers, the [[amyloid]] component of extracellular material of [[biofilms]], as a platform for programmable [[nanomaterial]]. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.<ref>{{cite journal|vauthors=Nguyen PQ, Botyanszki Z, Tay PK, Joshi NS|date=September 2014|title=Programmable biofilm-based materials from engineered curli nanofibres|journal=Nature Communications|volume=5|pages=4945|bibcode=2014NatCo...5.4945N|doi=10.1038/ncomms5945|pmid=25229329|doi-access=free}}</ref><br />
<br />
Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but generally only 20 amino acids are encoded in all organisms. Certain codons are engineered to code for alternative amino acids, including nonstandard amino acids such as O-methyl tyrosine, or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA-Aminoacyl tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required.<br />
<br />
另一个常见的研究方向是对20种天然氨基酸的扩展。不计终止密码子,已鉴定出61个密码子,但所有生物体中通常只编码20种氨基酸。某些密码子被改造为编码其他氨基酸,包括:非标准氨基酸,如O-甲基酪氨酸;或外源氨基酸,如4-氟苯丙氨酸。通常,这些项目利用来自其他生物体的、经过重新编码的无义抑制tRNA-氨酰tRNA合成酶对,尽管在大多数情况下这需要大量的工程改造。<br />
<br />
<br />
<br />
=== Designed proteins 设计蛋白质 ===<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid. For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of Chorismate mutase still had catalytic activity when only 9 amino acids were used.<br />
<br />
其他研究人员通过减少正常的20种氨基酸来研究蛋白质的结构和功能。通过生成一组氨基酸可被单一氨基酸取代的蛋白质,可以构建有限的蛋白质序列库。例如,一个蛋白质中的几种非极性氨基酸都可以被单一的非极性氨基酸取代。一个研究项目证明,当只使用9种氨基酸时,一种改造过的分支酸变位酶仍然具有催化活性。<br />
<br />
<br />
<br />
[[File:Top7.png|thumb|The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature<ref name="kuhlman03">{{cite journal | vauthors = Kuhlman B, Dantas G, Ireton GC, Varani G, Stoddard BL, Baker D | title = Design of a novel globular protein fold with atomic-level accuracy | journal = Science | volume = 302 | issue = 5649 | pages = 1364–8 | date = November 2003 | pmid = 14631033 | doi = 10.1126/science.1089427 | bibcode = 2003Sci...302.1364K | s2cid = 1939390 | url = https://semanticscholar.org/paper/3188f905b60172dcad17a9b8c23567400c2bb65f }}</ref> ]]<br />
<br />
Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective. The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<br />
<br />
研究人员和公司运用合成生物学来合成具有高活性、最佳产量和高效性的工业酶。这些合成酶旨在改进洗涤剂和无乳糖乳制品等产品,并使它们更具成本效益。合成生物学对代谢工程的改进是生物技术在工业中用于发现药物和发酵化学品的一个典型例子。合成生物学可以研究生化生产中的模块化途径系统,并提高代谢产物的产量。人工酶活性及其对代谢反应速率和产量的后续影响,可能发展出"改善细胞特性......用于重要工业生化生产的有效新策略"。<br />
<br />
<br />
<br />
Natural proteins can be engineered; for example, by [[directed evolution]], novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a [[helix bundle]] that was capable of binding [[oxygen]] with properties similar to [[hemoglobin]], yet did not bind [[carbon monoxide]].<ref>{{cite journal | vauthors = Koder RL, Anderson JL, Solomon LA, Reddy KS, Moser CC, Dutton PL | title = Design and engineering of an O(2) transport protein | journal = Nature | volume = 458 | issue = 7236 | pages = 305–9 | date = March 2009 | pmid = 19295603 | pmc = 3539743 | doi = 10.1038/nature07841 | bibcode = 2009Natur.458..305K }}</ref> A similar protein structure was generated to support a variety of [[oxidoreductase]] activities <ref>{{cite journal | vauthors = Farid TA, Kodali G, Solomon LA, Lichtenstein BR, Sheehan MM, Fry BA, Bialas C, Ennist NM, Siedlecki JA, Zhao Z, Stetz MA, Valentine KG, Anderson JL, Wand AJ, Discher BM, Moser CC, Dutton PL | title = Elementary tetrahelical protein design for diverse oxidoreductase functions | journal = Nature Chemical Biology | volume = 9 | issue = 12 | pages = 826–833 | date = December 2013 | pmid = 24121554 | pmc = 4034760 | doi = 10.1038/nchembio.1362 }}</ref> while another formed a structurally and sequentially novel [[ATPase]].<ref name="WangHecht2020">{{cite journal|last1=Wang|first1=MS|last2=Hecht|first2=MH|title=A Completely De Novo ATPase from Combinatorial Protein Design|journal=Journal of the American Chemical Society|year=2020|volume=142|issue=36|pages=15230–15234|issn=0002-7863|doi=10.1021/jacs.0c02954|pmid=32833456}}</ref> Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule [[clozapine N-oxide]] but were insensitive to the native [[ligand]], [[acetylcholine]]; these receptors are known as [[Receptor activated solely by a synthetic ligand|DREADDs]].<ref>{{cite journal | vauthors = Armbruster BN, Li X, Pausch 
MH, Herlitze S, Roth BL | title = Evolving the lock to fit the key to create a family of G protein-coupled receptors potently activated by an inert ligand | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 104 | issue = 12 | pages = 5163–8 | date = March 2007 | pmid = 17360345 | pmc = 1829280 | doi = 10.1073/pnas.0700293104 | bibcode = 2007PNAS..104.5163A }}</ref> Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100 fold specificity for production of longer chain alcohols from sugar.<ref>{{cite journal | vauthors = Mak WS, Tran S, Marcheschi R, Bertolani S, Thompson J, Baker D, Liao JC, Siegel JB | title = Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway | journal = Nature Communications | volume = 6 | pages = 10005 | date = November 2015 | pmid = 26598135 | pmc = 4673503 | doi = 10.1038/ncomms10005 | bibcode = 2015NatCo...610005M }}</ref><br />
<br />
<br />
<br />
Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, Ribosome Binding Site Calculator, Cello, and Non-Repetitive Parts Calculator enable the design of new genetic systems.<br />
<br />
科学家可以将数字信息编码到一条合成DNA链上。2012年,乔治·M·丘奇(George M. Church)用DNA编码了他的一本关于合成生物学的书。这5.3 Mb的数据量比之前存储在合成DNA中的最大信息量大1000多倍。一个类似的项目将威廉·莎士比亚的全部十四行诗编码进了DNA。更一般地说,诸如NUPACK、ViennaRNA、Ribosome Binding Site Calculator、Cello和Non-Repetitive Parts Calculator之类的算法使新遗传系统的设计成为可能。<br />
<br />
Another common investigation is [[Expanded genetic code|expansion]] of the natural set of 20 [[amino acid]]s. Excluding [[stop codon]]s, 61 [[codons]] have been identified, but generally only 20 amino acids are encoded in all organisms. Certain codons are engineered to code for alternative amino acids, including nonstandard amino acids such as O-methyl [[tyrosine]], or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded [[nonsense suppressor]] [[Transfer RNA|tRNA]]-[[Aminoacyl tRNA synthetase]] pairs from other organisms, though in most cases substantial engineering is required.<ref>{{cite journal | vauthors = Wang Q, Parrish AR, Wang L | title = Expanding the genetic code for biological studies | journal = Chemistry & Biology | volume = 16 | issue = 3 | pages = 323–36 | date = March 2009 | pmid = 19318213 | pmc = 2696486 | doi = 10.1016/j.chembiol.2009.03.001 }}</ref><br />
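The codon-reassignment idea above can be sketched computationally: translation with the standard table stops at the amber codon (UAG), while an "expanded" table reads it as a nonstandard amino acid. The table fragment and the "OmeY" label below are illustrative shorthand of our own, not a data structure from any real tool.

```python
# Sketch of amber-codon reassignment for genetic-code expansion.
# A re-coded suppressor tRNA/synthetase pair effectively makes the
# UAG stop codon code for a nonstandard amino acid; here we label
# O-methyl-tyrosine "OmeY" (a made-up token).

STANDARD = {
    "AUG": "M", "UUU": "F", "UAU": "Y", "AAA": "K",
    "UAG": "*",                      # '*' marks a stop codon
}

def translate(mrna, table):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = table[mrna[i:i + 3]]
        if aa == "*":                # stop codon terminates translation
            break
        protein.append(aa)
    return protein

expanded = dict(STANDARD)
expanded["UAG"] = "OmeY"             # amber suppression: UAG now codes

mrna = "AUGUUUUAGAAA"
natural = translate(mrna, STANDARD)      # terminates at UAG
engineered = translate(mrna, expanded)   # reads through UAG
```

The engineered table reads through the amber codon and incorporates the nonstandard residue, mirroring how suppressor tRNA pairs behave in vivo.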
<br />
<br />
<br />
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate mRNA or proteins able to use the artificial nucleotides.<br />
<br />
在体外和体内将非天然核苷酸和氨基酸掺入核酸和蛋白质的多种技术已被开发出来。例如,2014年5月,研究人员宣布他们已经成功地将两种新的人工核苷酸引入细菌DNA。通过在培养基中加入单独的人工核苷酸,他们能够交换细菌24次;这些细菌没有产生能够利用人工核苷酸的mRNA或蛋白质。<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid.<ref>{{cite journal|author=Davidson, AR|author2=Lumb, KJ|author3=Sauer, RT|date=1995|title=Cooperatively folded proteins in random sequence libraries|journal=Nature Structural Biology|volume=2|issue=10|pages=856–864|doi=10.1038/nsb1095-856|pmid=7552709|s2cid=31781262}}</ref> For instance, several [[Chemical polarity|non-polar]] amino acids within a protein can all be replaced with a single non-polar amino acid.<ref>{{cite journal|vauthors=Kamtekar S, Schiffer JM, Xiong H, Babik JM, Hecht MH|date=December 1993|title=Protein design by binary patterning of polar and nonpolar amino acids|journal=Science|volume=262|issue=5140|pages=1680–5|bibcode=1993Sci...262.1680K|doi=10.1126/science.8259512|pmid=8259512}}</ref> One project demonstrated that an engineered version of [[Chorismate mutase]] still had catalytic activity when only 9 amino acids were used.<ref>{{cite journal|vauthors=Walter KU, Vamvaca K, Hilvert D|date=November 2005|title=An active enzyme constructed from a 9-amino acid alphabet|journal=The Journal of Biological Chemistry|volume=280|issue=45|pages=37742–6|doi=10.1074/jbc.M507210200|pmid=16144843|doi-access=free}}</ref><br />
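The alphabet-reduction idea can be sketched as a simple sequence collapse: each residue is replaced by a single representative of its chemical class. The class assignments and representatives below are our own simplification for illustration, not the 9-letter set actually used in the chorismate mutase study.

```python
# Illustrative sketch of shrinking the amino-acid alphabet: every
# residue is mapped to one representative of its chemical class.
# The classes and representatives are a simplification, not the
# alphabet from any particular published library.

CLASSES = {
    "nonpolar": set("AVLIMFWGP"),
    "polar":    set("STNQCY"),
    "positive": set("KRH"),
    "negative": set("DE"),
}
REPRESENTATIVE = {
    "nonpolar": "L", "polar": "S", "positive": "K", "negative": "E",
}

def collapse(seq):
    """Rewrite a protein sequence using one residue per class."""
    out = []
    for aa in seq:
        for cls, members in CLASSES.items():
            if aa in members:
                out.append(REPRESENTATIVE[cls])
                break
    return "".join(out)

reduced = collapse("MKTAYIAKQRQ")   # collapses 20 letters to 4
```

Real reduced-alphabet libraries choose the representative set so that folding and, in the chorismate mutase case, catalysis are preserved.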
<br />
<br />
<br />
Researchers and companies practice synthetic biology to synthesize [[industrial enzymes]] with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective.<ref>{{cite web|url=https://www.thermofisher.com/us/en/home/life-science/synthetic-biology/synthetic-biology-applications.html|title=Synthetic Biology Applications|website=www.thermofisher.com|access-date=2015-11-12}}</ref> The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<ref>{{cite journal | vauthors = Liu Y, Shin HD, Li J, Liu L | title = Toward metabolic engineering in the context of system biology and synthetic biology: advances and prospects | journal = Applied Microbiology and Biotechnology | volume = 99 | issue = 3 | pages = 1109–18 | date = February 2015 | pmid = 25547833 | doi = 10.1007/s00253-014-6298-y | s2cid = 954858 }}</ref><br />
<br />
Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<br />
<br />
合成生物学引起了美国国家航空航天局(NASA)的兴趣,因为它有助于利用从地球送来的有限化合物为宇航员生产资源。特别是在火星上,合成生物学可以实现基于当地资源的生产过程,使其成为建设对地球依赖更少的载人前哨站的有力工具。<br />
<br />
<br />
<br />
=== Designed nucleic acid systems 设计核酸系统 ===<br />
<br />
Scientists can encode digital information onto a single strand of [[synthetic DNA]]. In 2012, [[George M. Church]] encoded one of his books about synthetic biology in DNA. The 5.3 [[Megabit|Mb]] of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA.<ref>{{cite journal | vauthors = Church GM, Gao Y, Kosuri S | title = Next-generation digital information storage in DNA | journal = Science | volume = 337 | issue = 6102 | pages = 1628 | date = September 2012 | pmid = 22903519 | doi = 10.1126/science.1226355 | bibcode = 2012Sci...337.1628C | s2cid = 934617 | url = https://semanticscholar.org/paper/0856a685e85bcd27c11cd5f385be818deceb27bd }}</ref> A similar project encoded the complete [[sonnet]]s of [[William Shakespeare]] in DNA.<ref>{{cite web|url=http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|title=Huge amounts of data can be stored in DNA|date=23 January 2013|publisher=Sky News|access-date=24 January 2013|archive-url=https://web.archive.org/web/20160531044937/http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|archive-date=2016-05-31 }}</ref> More generally, algorithms such as NUPACK,<ref>{{Cite journal|last1=Zadeh|first1=Joseph N.|last2=Steenberg|first2=Conrad D.|last3=Bois|first3=Justin S.|last4=Wolfe|first4=Brian R.|last5=Pierce|first5=Marshall B.|last6=Khan|first6=Asif R.|last7=Dirks|first7=Robert M.|last8=Pierce|first8=Niles A.|date=2011-01-15|title=NUPACK: Analysis and design of nucleic acid systems|journal=Journal of Computational Chemistry|language=en|volume=32|issue=1|pages=170–173|doi=10.1002/jcc.21596|pmid=20645303}}</ref> ViennaRNA,<ref>{{Cite journal|last1=Lorenz|first1=Ronny|last2=Bernhart|first2=Stephan H.|last3=Höner zu Siederdissen|first3=Christian|last4=Tafer|first4=Hakim|last5=Flamm|first5=Christoph|last6=Stadler|first6=Peter F.|last7=Hofacker|first7=Ivo L.|date=2011-11-24|title=ViennaRNA Package 2.0|journal=Algorithms for 
Molecular Biology|language=en|volume=6|issue=1|pages=26|doi=10.1186/1748-7188-6-26|issn=1748-7188|pmc=3319429|pmid=22115189}}</ref> Ribosome Binding Site Calculator,<ref>{{Cite journal|last1=Salis|first1=Howard M.|last2=Mirsky|first2=Ethan A.|last3=Voigt|first3=Christopher A.|date=October 2009|title=Automated design of synthetic ribosome binding sites to control protein expression|journal=Nature Biotechnology|language=en|volume=27|issue=10|pages=946–950|doi=10.1038/nbt.1568|pmid=19801975|issn=1546-1696|pmc=2782888}}</ref> Cello,<ref>{{Cite journal|last1=Nielsen|first1=A. A. K.|last2=Der|first2=B. S.|last3=Shin|first3=J.|last4=Vaidyanathan|first4=P.|last5=Paralanov|first5=V.|last6=Strychalski|first6=E. A.|last7=Ross|first7=D.|last8=Densmore|first8=D.|last9=Voigt|first9=C. A.|date=2016-04-01|title=Genetic circuit design automation|journal=Science|language=en|volume=352|issue=6281|pages=aac7341|doi=10.1126/science.aac7341|pmid=27034378|issn=0036-8075|doi-access=free}}</ref> and Non-Repetitive Parts Calculator<ref>{{Cite journal|last1=Hossain|first1=Ayaan|last2=Lopez|first2=Eriberto|last3=Halper|first3=Sean M.|last4=Cetnar|first4=Daniel P.|last5=Reis|first5=Alexander C.|last6=Strickland|first6=Devin|last7=Klavins|first7=Eric|last8=Salis|first8=Howard M.|date=2020-07-13|title=Automated design of thousands of nonrepetitive parts for engineering stable genetic systems|url=https://www.nature.com/articles/s41587-020-0584-2|journal=Nature Biotechnology|language=en|pages=1–10|doi=10.1038/s41587-020-0584-2|pmid=32661437|s2cid=220506228|issn=1546-1696}}</ref> enable the design of new genetic systems.<br />
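The bit-to-base mapping behind DNA data storage can be sketched as follows. Church's scheme encoded one bit per base (0 as A or C, 1 as G or T); the simple alternation rule below for choosing within each pair is our own stand-in for how homopolymer runs are avoided, not the published algorithm.

```python
# Minimal sketch of one-bit-per-base DNA data encoding:
# 0 -> A or C, 1 -> G or T. We alternate within each pair to
# break up homopolymer runs; decoding only needs to know which
# pair a base belongs to.

ZERO, ONE = "AC", "GT"

def encode(bits):
    dna = []
    for i, b in enumerate(bits):
        pair = ONE if b == "1" else ZERO
        dna.append(pair[i % 2])       # alternate within the pair
    return "".join(dna)

def decode(dna):
    return "".join("1" if base in ONE else "0" for base in dna)

bits = format(ord("H"), "08b")        # the byte for 'H'
strand = encode(bits)                 # a synthesizable base string
```

Decoding `strand` recovers `bits` exactly, which is what makes the scheme usable for archival storage once sequencing error correction is layered on top.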
<br />
<br />
<br />
Gene functions in the minimal genome of the synthetic organism, Syn 3.<br />
<br />
合成生物 Syn 3 的最小基因组中的基因功能。<br />
<br />
Many technologies have been developed for incorporating [[Nucleic acid analogue|unnatural nucleotides]] and amino acids into nucleic acids and proteins, both ''in vitro'' and ''in vivo''. For example, in May 2014, researchers announced that they had successfully introduced two new artificial [[nucleotides]] into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate [[Messenger RNA|mRNA]] or proteins able to use the artificial nucleotides.<ref name="NYT-20140507">{{cite news|url=https://www.nytimes.com/2014/05/08/business/researchers-report-breakthrough-in-creating-artificial-genetic-code.html|title=Researchers Report Breakthrough in Creating Artificial Genetic Code|last=Pollack|first=Andrew|date=May 7, 2014|work=[[New York Times]]|access-date=May 7, 2014}}</ref><ref name="NATURE-20140507">{{cite journal|last=Callaway|first=Ewen|date=May 7, 2014|title=First life with 'alien' DNA|url=http://www.nature.com/news/first-life-with-alien-dna-1.15179|journal=[[Nature (journal)|Nature]]|doi=10.1038/nature.2014.15179|s2cid=86967999|access-date=May 7, 2014}}</ref><ref name="NATJ-20140507">{{cite journal|vauthors=Malyshev DA, Dhami K, Lavergne T, Chen T, Dai N, Foster JM, Corrêa IR, Romesberg FE|date=May 2014|title=A semi-synthetic organism with an expanded genetic alphabet|journal=Nature|volume=509|issue=7500|pages=385–8|bibcode=2014Natur.509..385M|doi=10.1038/nature13314|pmc=4058825|pmid=24805238}}</ref><br />
<br />
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to probe the origins of life, study some of the properties of life, or, more ambitiously, to recreate life from non-living (abiotic) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools. A living "artificial cell" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules, store information and mutate; nobody has yet been able to create such a cell. However, a completely synthetic bacterial chromosome has been introduced into genomically emptied bacterial host cells, and the host cells were able to grow and replicate. The Mycoplasma laboratorium is the only living organism with a completely engineered genome.<br />
<br />
合成生物学的一个重要课题是合成生命,它涉及在体外由生物分子和/或其化学类似物创造的假想生物体。合成生命实验试图探索生命的起源、研究生命的某些特性,或者更雄心勃勃地,从非生命(非生物)组分中重新创造生命。合成生命生物学试图创造能够执行重要功能的生命有机体,从制造药品到净化被污染的土地和水。在医学上,它提供了以设计的生物学部件为起点开发新型疗法和诊断工具的前景。活的"人工细胞"被定义为能够捕获能量、维持离子梯度、容纳大分子、存储信息并具有突变能力的完全合成的细胞;还没有人能够制造出这样的细胞。不过,一条完全合成的细菌染色体已被导入基因组被清空的细菌宿主细胞中,这些宿主细胞能够生长和复制。实验室支原体(Mycoplasma laboratorium)是唯一拥有完全工程化基因组的生物体。<br />
<br />
<br />
<br />
=== Space exploration 太空探索 ===<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome with an expanded genetic code. The nucleosides added are d5SICS and dNaM. In 2017 the international Build-a-Cell large-scale research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. <br />
The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.<br />
<br />
2014年,第一个具有"人工"扩展DNA编码的活体生物问世;研究小组使用的大肠杆菌的基因组被提取出来,并被替换为带有扩展遗传密码的染色体。添加的核苷是d5SICS和dNaM。2017年,旨在构建合成活细胞的国际大规模研究合作项目Build-a-Cell启动,随后多个国家成立了本国的合成细胞组织,包括FabriCell、MaxSynBio和BaSyC。2019年,欧洲的合成细胞研究工作被整合为SynCellEU倡议。<br />
<br />
Synthetic biology raised [[NASA|NASA's]] interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.<ref name="Verseux, C. 2015 73–100">{{Cite book|author=Verseux, C.|author2=Paulino-Lima, I.|author3=Baque, M.|author4=Billi, D.|author5=Rothschild, L.|date=2016|title=Synthetic Biology for Space Exploration: Promises and Societal Implications|journal=Ambivalences of Creating Life. Societal and Philosophical Dimensions of Synthetic Biology, Publisher: Springer-Verlag|volume=45|pages=73–100|doi=10.1007/978-3-319-21088-9_4|series=Ethics of Science and Technology Assessment|isbn=978-3-319-21087-2}}</ref><ref>{{cite journal|last1=Menezes|first1=A|last2=Cumbers|first2=J|last3=Hogan|first3=J|last4=Arkin|first4=A|date=2014|title=Towards synthetic biological approaches to resource utilization on space missions|journal=Journal of the Royal Society, Interface|volume=12|issue=102|pages=20140715|doi=10.1098/rsif.2014.0715|pmid=25376875|pmc=4277073}}</ref><ref>{{cite journal | vauthors = Montague M, McArthur GH, Cockell CS, Held J, Marshall W, Sherman LA, Wang N, Nicholson WL, Tarjan DR, Cumbers J | title = The role of synthetic biology for in situ resource utilization (ISRU) | journal = Astrobiology | volume = 12 | issue = 12 | pages = 1135–42 | date = December 2012 | pmid = 23140229 | doi = 10.1089/ast.2012.0829 | bibcode = 2012AsBio..12.1135M }}</ref> On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<ref name="Verseux, C. 
2015 73–100" /> Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.<ref>{{Cite web|title=NASA - Designer Plants on Mars|url=https://www.nasa.gov/centers/goddard/news/topstory/2005/mars_plants.html|last=GSFC|first=Bill Steigerwald |website=www.nasa.gov|language=en|access-date=2020-05-29}}</ref><br />
<br />
<br />
<br />
=== Synthetic life 合成生命 ===<br />
<br />
{{Further|Artificially Expanded Genetic Information System|Hypothetical types of biochemistry}}<br />
<br />
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently, synthetic biologists have reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. The other way is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into bacteria. The bacteria then release target therapeutic molecules to the tumor only through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems, as well as other strategies, can be used. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
<br />
长期以来,细菌一直被用于癌症治疗。双歧杆菌和梭状芽胞杆菌可选择性地定殖于肿瘤并缩小肿瘤体积。最近,合成生物学家对细菌进行了重新编程,使其能够感知并响应特定的癌症状态。细菌最常被用来将治疗分子直接递送到肿瘤,以最小化脱靶效应。为了靶向肿瘤细胞,可在细菌表面表达能特异性识别肿瘤的多肽,其中包括特异性靶向人表皮生长因子受体2的亲和体(affibody)分子和一种合成粘附素。另一种方法是通过在细菌中构建"与"逻辑门,让细菌感知肿瘤微环境,例如缺氧。然后,细菌只通过裂解或细菌分泌系统向肿瘤释放靶向治疗分子。裂解的优点是可以刺激免疫系统并控制生长。这一过程中可以使用多种类型的分泌系统以及其他策略。该系统可由外部信号诱导,诱导因子包括化学物质、电磁波或光波。<br />
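The AND-gate behaviour described above, releasing the payload only when both tumor signals are present, can be sketched as plain boolean logic. The two inputs (hypoxia and a quorum-sensing signal) and all threshold values below are illustrative assumptions, not measurements from any engineered strain.

```python
# Sketch of a bacterial AND gate for tumor targeting: the payload
# is released only when BOTH conditions hold, e.g. low oxygen
# (hypoxia) and high cell density (quorum). Thresholds are
# illustrative placeholders.

HYPOXIA_O2_MAX = 0.5    # % O2 below which "hypoxia" is sensed
QUORUM_AHL_MIN = 1e-7   # molar AHL above which quorum is sensed

def and_gate(o2_percent, ahl_molar):
    hypoxia = o2_percent < HYPOXIA_O2_MAX
    quorum = ahl_molar > QUORUM_AHL_MIN
    return hypoxia and quorum   # release payload only if both
```

For example, a hypoxic, densely colonized tumor triggers release, while well-oxygenated tissue or a sparse population keeps the gate off, which is the point of requiring two independent signals.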
<br />
[[File:Syn3 genome.svg|thumb|upright=1.25|[[Gene]] functions in the minimal [[genome]] of the synthetic organism, ''[[Syn 3]]''.<ref name="Hutchison">{{cite journal | vauthors = Hutchison CA, Chuang RY, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, Gill J, Kannan K, Karas BJ, Ma L, Pelletier JF, Qi ZQ, Richter RA, Strychalski EA, Sun L, Suzuki Y, Tsvetanova B, Wise KS, Smith HO, Glass JI, Merryman C, Gibson DG, Venter JC | title = Design and synthesis of a minimal bacterial genome | journal = Science | volume = 351 | issue = 6280 | pages = aad6253 | date = March 2016 | pmid = 27013737 | doi = 10.1126/science.aad6253 | bibcode = 2016Sci...351.....H | doi-access = free }}</ref>]]<br />
<br />
One important topic in synthetic biology is ''synthetic life'', that is concerned with hypothetical organisms created ''[[in vitro]]'' from [[biomolecule]]s and/or [[hypothetical types of biochemistry|chemical analogues thereof]]. Synthetic life experiments attempt to either probe the [[origins of life]], study some of the properties of life, or more ambitiously to recreate life from non-living ([[abiotic components|abiotic]]) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water.<ref name="enzymes2014">{{cite news |last=Connor |first=Steve |url=https://www.independent.co.uk/news/science/major-synthetic-life-breakthrough-as-scientists-make-the-first-artificial-enzymes-9896333.html |title=Major synthetic life breakthrough as scientists make the first artificial enzymes |work=The Independent |location=London |date=1 December 2014 |access-date=2015-08-06 }}</ref> In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.<ref name="enzymes2014" /><br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and is unique in cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
在这些治疗方法中应用了多种菌种和菌株。最常用的细菌是鼠伤寒沙门氏菌、大肠杆菌、双歧杆菌、链球菌、乳酸杆菌、李斯特菌和枯草芽孢杆菌。这些菌种各有特性,在组织定殖、与免疫系统的相互作用和应用的便利性方面,它们在癌症治疗中各有独到之处。<br />
<br />
<br />
<br />
A living "artificial cell" has been defined as a completely synthetic cell that can capture [[energy]], maintain [[electrochemical gradient|ion gradients]], contain [[macromolecules]] as well as store information and have the ability to [[mutate]].<ref name="Deamer">{{cite journal | vauthors = Deamer D | title = A giant step towards artificial life? | journal = Trends in Biotechnology | volume = 23 | issue = 7 | pages = 336–8 | date = July 2005 | pmid = 15935500 | doi = 10.1016/j.tibtech.2005.05.008 }}</ref> Nobody has been able to create such a cell.<ref name='Deamer'/><br />
<br />
<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells.<br />
<br />
免疫系统在癌症中起着重要作用。可以利用免疫系统攻击癌细胞。以细胞为基础的疗法主要是免疫疗法,主要方法是通过改造 T 细胞来完成治疗。<br />
<br />
A completely synthetic bacterial chromosome was produced in 2010 by [[Craig Venter]], and his team introduced it into genomically emptied bacterial host cells.<ref name="gibson52">{{cite journal | vauthors = Gibson DG, Glass JI, Lartigue C, Noskov VN, Chuang RY, Algire MA, Benders GA, Montague MG, Ma L, Moodie MM, Merryman C, Vashee S, Krishnakumar R, Assad-Garcia N, Andrews-Pfannkoch C, Denisova EA, Young L, Qi ZQ, Segall-Shapiro TH, Calvey CH, Parmar PP, Hutchison CA, Smith HO, Venter JC | title = Creation of a bacterial cell controlled by a chemically synthesized genome | journal = Science | volume = 329 | issue = 5987 | pages = 52–6 | date = July 2010 | pmid = 20488990 | doi = 10.1126/science.1190719 | bibcode = 2010Sci...329...52G | doi-access = free }}</ref> The host cells were able to grow and replicate.<ref>{{cite web| url=https://www.npr.org/templates/transcript/transcript.php?storyId=127010591| title=Scientists Reach Milestone On Way To Artificial Life| access-date=2010-06-09|date=2010-05-20}}</ref><ref>{{cite web|last1=Venter|first1=JC|title=From Designing Life to Prolonging Healthy Life|url=https://www.youtube.com/watch?v=Gwu_djYMm3w&t=30s|website=YouTube|publisher=University of California Television (UCTV)|access-date=1 February 2017}}</ref> The [[Mycoplasma laboratorium]] is the only living organism with a completely engineered genome.<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second-generation CAR-based therapy has been approved by the FDA.<br />
<br />
T 细胞受体经过设计和“训练”,用以检测癌症表位。嵌合抗原受体(CAR)由抗体片段与细胞内 T 细胞信号结构域融合而成,这些信号结构域可以激活细胞并触发其增殖。美国食品药品监督管理局(FDA)已批准一种第二代基于 CAR 的疗法。<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used ''E. coli'' that had its genome extracted and replaced with a chromosome with an expanded genetic code. The [[nucleoside]]s added are [[d5SICS]] and [[dNaM]].<ref name="NATJ-20140507"/><br />
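The idea of an expanded genetic alphabet can be illustrated with a short Python sketch. The single-letter abbreviations `X` (d5SICS) and `Y` (dNaM) below are placeholder notation of our own, not an established convention:

```python
# Standard Watson-Crick pairs plus the unnatural d5SICS-dNaM pair,
# abbreviated here as X and Y (placeholder letters, not standard notation).
COMPLEMENT = {
    "A": "T", "T": "A", "G": "C", "C": "G",
    "X": "Y", "Y": "X",  # d5SICS (X) pairs with dNaM (Y)
}

def reverse_complement(strand: str) -> str:
    """Reverse-complement a strand over the six-letter alphabet."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGXCY"))  # -> XGYCAT
```

The point of the sketch is only that duplex formation generalizes once a third, orthogonal base pair is available; as reported, replication in vivo additionally required importing the unnatural nucleotides into the cell.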
<br />
<br />
<br />
<br />
In May 2019, researchers, in a milestone effort, reported the creation of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 61 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515"/><ref name="NAT-20190515"/><br />
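The recoding strategy relies on the degeneracy of the genetic code: a codon can be removed genome-wide by rewriting every occurrence to a synonym, leaving every protein sequence unchanged. A toy sketch (the codon table is truncated to the codons used here, and the particular substitutions TCG→AGC, TCA→AGT, TAG→TAA follow the published Syn61-style recoding scheme and are illustrative):

```python
# Toy genome recoding by synonymous-codon compression. A real genetic
# code table has 64 codons; only those used below are included.
CODON_TO_AA = {
    "ATG": "M",                                      # methionine / start
    "TCG": "S", "TCA": "S", "AGC": "S", "AGT": "S",  # serine synonyms
    "TAG": "*", "TAA": "*",                          # stop codons
}
RECODE = {"TCG": "AGC", "TCA": "AGT", "TAG": "TAA"}  # codons to retire

def recode(gene: str) -> str:
    """Rewrite each retired codon to its designated synonym."""
    codons = (gene[i:i + 3] for i in range(0, len(gene), 3))
    return "".join(RECODE.get(c, c) for c in codons)

def translate(gene: str) -> str:
    return "".join(CODON_TO_AA[gene[i:i + 3]] for i in range(0, len(gene), 3))

gene = "ATGTCGTCATAG"                    # Met-Ser-Ser-Stop
assert recode(gene) == "ATGAGCAGTTAA"    # three codons rewritten
assert translate(recode(gene)) == translate(gene)  # protein unchanged
```

Freeing codons this way is what allows them to be reassigned later to non-canonical amino acids.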
<br />
<br />
<br />
Although several mechanisms can improve safety and control, limitations remain, including the difficulty of delivering large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.<br />
<br />
<br />
In 2017 the international [[Build-a-Cell]] large-scale research collaboration for the construction of a synthetic living cell was started,<ref>{{cite web|url=http://buildacell.io/|title=Build-a-Cell|accessdate=4 Dec 2019}}</ref> followed by national synthetic cell organizations in several countries, including FabriCell,<ref>{{cite web|url=http://fabricell.org/|title=FabriCell|accessdate=8 Dec 2019}}</ref> MaxSynBio<ref>{{cite web|url=https://www.maxsynbio.mpg.de/home/|title=MaxSynBio - Max Planck Research Network in Synthetic Biology|accessdate=8 Dec 2019}}</ref> and BaSyC.<ref>{{cite web|url=http://www.basyc.nl/|title=BaSyC|accessdate=8 Dec 2019}}</ref> The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.<ref>{{cite web|url=http://www.syntheticcell.eu/|title=SynCell EU|accessdate=8 Dec 2019}}</ref><br />
<br />
<br />
<br />
=== Drug delivery platforms ===<br />
<br />
==== Engineered bacteria-based platform ====<br />
<br />
Bacteria have long been used in cancer treatment. ''[[Bifidobacterium]]'' and ''[[Clostridium]]'' selectively colonize tumors and reduce their size.<ref name="Zu_2014">{{cite journal|vauthors=Zu C, Wang J|date=August 2014|title=Tumor-colonizing bacteria: a potential tumor targeting therapy|url=|journal=Critical Reviews in Microbiology|volume=40|issue=3|pages=225–35|doi=10.3109/1040841X.2013.776511|pmid=23964706|s2cid=26498221}}</ref> Recently, synthetic biologists have reprogrammed bacteria to sense and respond to a particular cancer state. Most often, bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, [[peptide]]s that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an [[affibody molecule]] that specifically targets human [[Epidermal growth factor receptor|epidermal growth factor receptor 2]]<ref name="Gujrati_2014">{{cite journal|vauthors=Gujrati V, Kim S, Kim SH, Min JJ, Choy HE, Kim SC, Jon S|date=February 2014|title=Bioengineered bacterial outer membrane vesicles as cell-specific drug-delivery vehicles for cancer therapy|url=|journal=ACS Nano|volume=8|issue=2|pages=1525–37|doi=10.1021/nn405724x|pmid=24410085}}</ref> and a synthetic [[Adhesin molecule (immunoglobulin -like)|adhesin]].<ref name="Piñero-Lambea_2015">{{cite journal|vauthors=Piñero-Lambea C, Bodelón G, Fernández-Periáñez R, Cuesta AM, Álvarez-Vallina L, Fernández LÁ|date=April 2015|title=Programming controlled adhesion of E. coli to target surfaces, cells, and tumors with synthetic adhesins|journal=ACS Synthetic Biology|volume=4|issue=4|pages=463–73|doi=10.1021/sb500252a|pmc=4410913|pmid=25045780}}</ref> Another approach is to allow bacteria to sense the [[tumor microenvironment]], for example hypoxia, by building an AND logic gate into the bacteria.<ref>{{cite journal | last1 = Deyneko | first1 = I.V. | last2 = Kasnitz | first2 = N. | last3 = Leschner | first3 = S. | last4 = Weiss | first4 = S. | year = 2016| title = Composing a tumor specific bacterial promoter | url = | journal = PLOS ONE | volume = 11| issue = 5| page = e0155338| doi = 10.1371/journal.pone.0155338 | pmid = 27171245 | pmc = 4865170 }}</ref> The bacteria then release the therapeutic molecules only in the tumor, through either [[lysis]]<ref>{{cite journal | last1 = Rice | first1 = KC | last2 = Bayles | first2 = KW | year = 2008 | title = Molecular control of bacterial death and lysis | journal = Microbiol Mol Biol Rev | volume = 72 | issue = 1| pages = 85–109 | doi = 10.1128/mmbr.00030-07 | pmid = 18322035 | pmc = 2268280 }}</ref> or the [[bacterial secretion system]].<ref>{{cite journal | last1 = Ganai | first1 = S. | last2 = Arenas | first2 = R. B. | last3 = Forbes | first3 = N. S. | year = 2009 | title = Tumour-targeted delivery of TRAIL using Salmonella typhimurium enhances breast cancer survival in mice | url = | journal = Br. J. Cancer | volume = 101 | issue = 10| pages = 1683–1691 | doi = 10.1038/sj.bjc.6605403 | pmid = 19861961 | pmc = 2778534 }}</ref> Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used, as well as other strategies. The system can be made inducible by external signals; inducers include chemicals and electromagnetic or light waves.<br />
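The AND-gate behaviour described above (payload release only when two microenvironment signals are present simultaneously) can be sketched as a steady-state model in which each promoter input is converted to an activation level by a Hill function. All parameter values, thresholds and signal names here are illustrative assumptions, not measured quantities:

```python
def hill(signal: float, k: float = 1.0, n: int = 2) -> float:
    """Hill activation: ~0 for signal << k, ~1 for signal >> k."""
    return signal ** n / (k ** n + signal ** n)

def and_gate(hypoxia: float, tumor_marker: float, threshold: float = 0.5) -> bool:
    """Drive the output promoter only when BOTH inputs are activated."""
    return hill(hypoxia) > threshold and hill(tumor_marker) > threshold

# Therapeutic output only in the tumor microenvironment (both signals high).
assert and_gate(5.0, 5.0) is True    # hypoxic AND marker present
assert and_gate(5.0, 0.1) is False   # hypoxia alone is not enough
assert and_gate(0.1, 5.0) is False   # marker alone is not enough
```

In an actual circuit the two inputs would be promoters (for example a hypoxia-inducible promoter and a tumor-specific one) wired so that both are needed to produce a functional activator; the Python model only mirrors that logic.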
<br />
<br />
<br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are ''[[Salmonella enterica subsp. enterica|Salmonella typhimurium]]'', [[Escherichia coli|''Escherichia coli'']], ''Bifidobacteria'', ''[[Streptococcus]]'', ''[[Lactobacillus]]'', ''[[Listeria]]'' and ''[[Bacillus subtilis]]''. Each of these species has its own properties and distinct advantages for cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
<br />
<br />
<br />
==== Cell-based platform ====<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on [[Cancer immunotherapy|immunotherapies]], mostly by engineering [[T cell]]s.<br />
<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer [[epitope]]s. [[Chimeric antigen receptor]]s (CARs) are composed of a fragment of an [[antibody]] fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second-generation CAR-based therapy was approved by the FDA.{{Citation needed|date=April 2018}}<br />
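The modular composition of a CAR can be pictured as a small object model. This is an illustrative sketch, not a biological simulation: CD19 is a common clinical target and the CD28/CD3ζ combination is typical of second-generation designs, but the class and its methods are our own invention:

```python
from dataclasses import dataclass, field

@dataclass
class CAR:
    """Toy model of a chimeric antigen receptor: an antibody-derived
    recognition fragment (scFv) fused to intracellular signaling domains."""
    scfv_target: str  # antigen recognised by the antibody fragment
    signaling_domains: list = field(default_factory=lambda: ["CD3z"])

    def engage(self, antigen: str) -> bool:
        """Binding the target antigen activates the T cell via the
        fused signaling domains, triggering proliferation."""
        return antigen == self.scfv_target

# Second-generation design: one costimulatory domain (CD28) plus CD3z.
car = CAR(scfv_target="CD19", signaling_domains=["CD28", "CD3z"])
assert car.engage("CD19") is True     # tumor antigen -> activation
assert car.engage("HER2") is False    # unrelated antigen ignored
```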
<br />
<br />
<br />
Gene switches were designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects.<ref>Jones, B.S., Lamb, L.S., Goldman, F. & Di Stasi, A. Improving the safety of cell therapy products by suicide gene transfer. Front. Pharmacol. 5, 254 (2014).</ref> Other mechanisms can control the system more finely, stopping and reactivating it.<ref>{{cite journal | last1 = Wei | first1 = P | last2 = Wong | first2 = WW | last3 = Park | first3 = JS | last4 = Corcoran | first4 = EE | last5 = Peisajovich | first5 = SG | last6 = Onuffer | first6 = JJ | last7 = Weiss | first7 = A | last8 = Lim | first8 = WA | year = 2012 | title = Bacterial virulence proteins as tools to rewire kinase pathways in yeast and immune cells | url = | journal = Nature | volume = 488 | issue = 7411| pages = 384–388 | doi = 10.1038/nature11259 | pmid = 22820255 | pmc = 3422413 }}</ref><ref>{{cite journal | last1 = Danino | first1 = T. | last2 = Mondragon-Palomino | first2 = O. | last3 = Tsimring | first3 = L. | last4 = Hasty | first4 = J. | year = 2010 | title = A synchronized quorum of genetic clocks | url = | journal = Nature | volume = 463 | issue = 7279| pages = 326–330 | doi = 10.1038/nature08753 | pmid = 20090747 | pmc = 2838179 }}</ref> Since the number of T cells is important for therapy persistence and severity, T-cell growth is also controlled to tune the effectiveness and safety of therapeutics.<ref>{{cite journal | last1 = Chen | first1 = Y. Y. | last2 = Jensen | first2 = M. C. | last3 = Smolke | first3 = C. D. | year = 2010 | title = Genetic control of mammalian T-cell proliferation with synthetic RNA regulatory systems | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 107 | issue = 19| pages = 8531–6 | doi = 10.1073/pnas.1001721107 | pmid = 20421500 | pmc = 2889348 }}</ref><br />
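The population-level effect of such a kill switch can be sketched with a minimal discrete-time model: exponential expansion until the switch is induced, first-order decay afterwards. All rates and the induction day are hypothetical:

```python
import math

def simulate_t_cells(days: int, induce_on_day: int,
                     growth: float = 0.5, kill: float = 2.0,
                     n0: float = 1.0) -> list:
    """Discrete-time CAR T-cell population model: exponential expansion
    until the kill switch is induced, then first-order decay."""
    n, trace = n0, [n0]
    for day in range(1, days + 1):
        rate = -kill if day >= induce_on_day else growth
        n *= math.exp(rate)  # per-day growth or decay factor
        trace.append(n)
    return trace

trace = simulate_t_cells(days=10, induce_on_day=6)
assert trace[5] > trace[0]    # expansion while therapy is active
assert trace[10] < trace[5]   # rapid contraction after induction
```

The asymmetry between the growth and kill rates reflects the design goal that termination should be much faster than expansion.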
<br />
<br />
<br />
<br />
<br />
== Ethics ==<br />
<br />
<br />
{{Update|section|date=January 2019}}<br />
<br />
<br />
<br />
The creation of new life and the tampering with existing life have raised [[Ethics|ethical concerns]] in the field of synthetic biology and are actively being discussed.<ref name=":3" /><br />
<br />
<br />
<br />
<br />
Common ethical questions include:<br />
<br />
<br />
<br />
* Is it morally right to tamper with nature?<br />
<br />
* Is one playing God when creating new life?<br />
<br />
<br />
* What happens if a synthetic organism accidentally escapes?<br />
<br />
* What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?<br />
<br />
<br />
* Who will have control of and access to the products of synthetic biology? <br />
<br />
* Who will gain from these innovations? Investors? Medical patients? Industrial farmers?<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<br />
<br />
<br />
* Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?<ref>{{Cite web|url=https://www.theguardian.com/science/2018/nov/26/worlds-first-gene-edited-babies-created-in-china-claims-scientist|title= World's first gene-edited babies created in China, claims scientist |last=Staff|first=Agencies|date=November 2018|website=The Guardian|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
* What if a new creation is deserving of moral or legal status?<br />
<br />
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies". The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'". It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public". Richard Lewontin wrote that some of the safety tenets for oversight discussed in ''The Principles for the Oversight of Synthetic Biology'' are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms.<ref>{{Cite journal|title=Synthetic Biology and Ethics: Past, Present, and Future|last=Hayry|first=Mattie|date=April 2017|journal=Cambridge Quarterly of Healthcare Ethics|volume=26|issue=2|pages=186–205|doi=10.1017/S0963180116000803|pmid=28361718}}</ref> Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<ref>{{Cite journal|title=Synthetic biology applied in the agrifood sector: Public perceptions, attitudes and implications for future studies|last=Jin |display-authors=etal |first=Shan|date=September 2019|journal=Trends in Food Science and Technology|volume=91|pages=454–466|doi=10.1016/j.tifs.2019.07.025}}</ref><ref name=":3">{{Cite journal|url=https://heinonline.org/HOL/LandingPage?handle=hein.journals/macq15&div=8&id=&page=| title=Synthetic Biology: Ethics, Exceptionalism and Expectations| pages=45| last=Newson|first=AJ|date=2015|journal=Macquarie Law Journal| volume=15|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
<br />
<br />
Ethical issues have surfaced for [[recombinant DNA]] and [[genetically modified organism]] (GMO) technologies, and extensive regulations of [[genetic engineering]] and pathogen research were in place in many jurisdictions. [[Amy Gutmann]], former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<ref>{{cite journal | last = Gutmann | first = Amy | date = 2012 | title = The Ethics of Synthetic Biology | volume=41 | issue=4 | pages = 17–22 | journal = The Hastings Center Report | doi = 10.1002/j.1552-146X.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
<br />
<br />
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms.<br />
<br />
<br />
=== The "creation" of life 创造生命 ===<br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<br />
<br />
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies.<ref name=":3" /> Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature’s "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how [[algal bloom]]s kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to [[nociception|sense pain]], [[sentience]], and self-perception. Should such life be given moral or legal rights? If so, how?<br />
<br />
<br />
<br />
=== Biosafety and biocontainment ===<br />
<br />
What is most ethically appropriate when considering biosafety measures? How can the accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought have been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concerns for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics, as there has yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild.<br />
<br />
<br />
<br />
In general, existing [[Hierarchy of hazard controls|hazard controls]], risk assessment methodologies, and regulations developed for traditional [[genetically modified organism]]s (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" [[biocontainment]] methods in a laboratory context include physical containment through [[biosafety cabinet]]s and [[glovebox]]es, as well as [[personal protective equipment]]. In an agricultural context they include isolation distances and [[pollen]] barriers, similar to methods for [[Biocontainment of genetically modified organisms|biocontainment of GMOs]]. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent [[horizontal gene transfer]] to natural organisms. Examples of intrinsic biocontainment include [[auxotrophy]], biological [[kill switch]]es, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of [[Xenobiology|xenobiological]] organisms using alternative biochemistry, for example using artificial [[xeno nucleic acid]]s (XNA) instead of DNA.<ref name=":12" /><ref name=":32">{{Cite journal|url=https://publications.europa.eu/en/publication-detail/-/publication/bfd7d06c-d3ae-11e5-a4b5-01aa75ed71a1/language-en|title=Opinion on synthetic biology II: Risk assessment methodologies and safety aspects|last=|first=|date=2016-02-12|website=EU [[Directorate-General for Health and Consumers]]|pages=|via=|doi=10.2772/63529|archive-url=|archive-date=|access-date=|volume=|publisher=Publications Office}}</ref> Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce [[histidine]], an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
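Intrinsic containment by auxotrophy can be pictured as growth strictly gated on an externally supplied nutrient. The toy model below (logistic growth with hypothetical rates) shows the intended behaviour: the strain thrives on supplemented medium and dies out without it:

```python
def grow(population: float, histidine_supplied: bool,
         rate: float = 0.8, capacity: float = 1e9,
         death: float = 0.3) -> float:
    """One generation of a histidine auxotroph. Without supplemented
    histidine the strain cannot make protein, so the population declines."""
    if not histidine_supplied:
        return population * (1 - death)
    return population + rate * population * (1 - population / capacity)

lab, wild = 1e3, 1e3
for _ in range(20):
    lab = grow(lab, histidine_supplied=True)     # histidine-rich lab medium
    wild = grow(wild, histidine_supplied=False)  # escaped into the environment
assert lab > 1e5    # thrives under containment
assert wild < 1.0   # dies out outside it
```

The same gating logic applies to the other intrinsic methods listed above (kill switches, xenobiological chassis): survival is made conditional on something only the controlled environment provides.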
<br />
<br />
<br />
<br />
<br />
=== Biosecurity ===<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies,<ref name="Bügl, H. et al. 2007 627–629">{{cite journal | vauthors = Bügl H, Danner JP, Molinari RJ, Mulligan JT, Park HO, Reichert B, Roth DA, Wagner R, Budowle B, Scripp RM, Smith JA, Steele SJ, Church G, Endy D | title = DNA synthesis and biological security | journal = Nature Biotechnology | volume = 25 | issue = 6 | pages = 627–9 | date = June 2007 | pmid = 17557094 | doi = 10.1038/nbt0607-627 | s2cid = 7776829 }}</ref><ref>{{cite web|url = http://www.synbioproject.org/site/assets/files/1335/hastings.pdf|title = Ethical Issues in Synthetic Biology: An Overview of the Debates|date = |access-date = |website = }}</ref> however, the issues are not seen as new because they were raised during the earlier [[recombinant DNA]] and [[genetically modified organism]] (GMO) debates and extensive regulations of [[genetic engineering]] and pathogen research are already in place in many jurisdictions.<ref name="bioethics.gov">Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/synthetic-biology-report NEW DIRECTIONS The Ethics of Synthetic Biology and Emerging Technologies] Retrieved 2012-04-14.</ref><br /><br />
<br />
<br />
<br />
=== European Union ===<br />
<br />
<br />
<br />
The [[European Union]]-funded project SYNBIOSAFE<ref>[http://www.synbiosafe.eu/ SYNBIOSAFE official site]</ref> has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.<ref name="Priorities">{{cite journal | vauthors = Schmidt M, Ganguli-Mitra A, Torgersen H, Kelle A, Deplazes A, Biller-Andorno N | title = A priority paper for the societal and ethical aspects of synthetic biology | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 3–7 | date = December 2009 | pmid = 19816794 | pmc = 2759426 | doi = 10.1007/s11693-009-9034-7 | url = http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf }}</ref><ref>Schmidt M. Kelle A. Ganguli A, de Vriend H. (Eds.) 2009. [https://www.springer.com/biomed/book/978-90-481-2677-4 "Synthetic Biology. The Technoscience and its Societal Consequences".] Springer Academic Publishing.</ref> The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the [[Do-it-yourself biology|biohacking]] community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called [[dual use technology|dual-use]] challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., [[smallpox]]).<ref>{{cite journal | vauthors = Kelle A | title = Ensuring the security of synthetic biology-towards a 5P governance strategy | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 85–90 | date = December 2009 | pmid = 19816803 | pmc = 2759433 | doi = 10.1007/s11693-009-9041-8 }}</ref> The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<ref>{{cite journal | vauthors = Schmidt M | title = Diffusion of synthetic biology: a challenge to biosafety | journal = Systems and Synthetic Biology | volume = 2 | issue = 1–2 | pages = 1–6 | date = June 2008 | pmid = 19003431 | pmc = 2671588 | doi = 10.1007/s11693-008-9018-z | url = http://www.markusschmidt.eu/pdf/Diffusion_of_synthetic_biology.pdf }}</ref><br />
<br />
<br />
<br />
COSY, another European initiative, focuses on public perception and communication.<ref>[http://www.synbio.at/ COSY: Communicating Synthetic Biology]</ref><ref>{{cite journal | vauthors = Kronberger N, Holtz P, Kerbe W, Strasser E, Wagner W | title = Communicating Synthetic Biology: from the lab via the media to the broader public | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 19–26 | date = December 2009 | pmid = 19816796 | pmc = 2759424 | doi = 10.1007/s11693-009-9031-x }}</ref><ref>{{cite journal | vauthors = Cserer A, Seiringer A | title = Pictures of Synthetic Biology : A reflective discussion of the representation of Synthetic Biology (SB) in the German-language media and by SB experts | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 27–35 | date = December 2009 | pmid = 19816797 | pmc = 2759430 | doi = 10.1007/s11693-009-9038-3 }}</ref> To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published ''SYNBIOSAFE'', a 38-minute documentary film, in October 2009.<ref>[http://www.synbiosafe.eu/DVD COSY/SYNBIOSAFE Documentary]</ref><br />
<br />
<br />
<br />
The International Association Synthetic Biology has proposed self-regulation.<ref>Report of IASB [http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf "Technical solutions for biosecurity in synthetic biology"] {{webarchive |url=https://web.archive.org/web/20110719031805/http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf |date=July 19, 2011 }}, Munich, 2008</ref> Its proposal sets out specific measures that the synthetic biology industry, especially DNA-synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<ref name="Bügl, H. et al. 2007 627–629" /><br />
<br />
<br />
<br />
=== United States 美国方面 ===<br />
<br />
<br />
<br />
In January 2009, the [[Alfred P. Sloan Foundation]] funded the [[Woodrow Wilson Center]], the [[Hastings Center]], and the [[J. Craig Venter Institute]] to examine the public perception, ethics and policy implications of synthetic biology.<ref>Parens E., Johnston J., Moses J. [http://www.thehastingscenter.org/who-we-are/our-research/selected-past-projects/ethical-issues-in-synthetic-biology-2/ Ethical Issues in Synthetic Biology.] 2009.</ref><br />
<br />
<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<ref>[http://sites.nationalacademies.org/PGA/stl/PGA_050738 NAS Symposium official site]</ref><br />
<br />
<br />
<br />
After the publication of the [[Mycoplasma laboratorium|first synthetic genome]] and the accompanying media coverage about "life" being created, President [[Barack Obama]] established the [[Presidential Commission for the Study of Bioethical Issues]] to study synthetic biology.<ref>Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/node/353 FAQ]</ref> The commission convened a series of meetings and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies". The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'".<ref>[http://bioethics.gov/node/353 Synthetic Biology F.A.Q.'s | Presidential Commission for the Study of Bioethical Issues]</ref> It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education.<ref name="bioethics.gov" /><br />
<br />
<br />
<br />
Synthetic biology, as a major tool for biological advances, carries the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact".<ref name=":2">{{cite journal | vauthors = Erickson B, Singh R, Winters P | title = Synthetic biology: regulating industry uses of new biotechnologies | journal = Science | volume = 333 | issue = 6047 | pages = 1254–6 | date = September 2011 | pmid = 21885775 | doi = 10.1126/science.1211066 | bibcode = 2011Sci...333.1254E | s2cid = 1568198 | url = https://semanticscholar.org/paper/6ae989f6b07dc3c8a8694792d6fe8f036a0e0292 }}</ref> These security issues may be mitigated by regulating industry uses of biotechnology through policy legislation. In response to the announced creation of a self-replicating cell from a chemically synthesized genome, the President's Bioethics Commission put forward 18 recommendations, not only for regulating the science but also for educating the public.<ref name=":2" /><br />
<br />
<br />
<br />
=== Opposition 反对意见 ===<br />
<br />
On March 13, 2012, over 100 environmental and civil society groups, including [[Friends of the Earth]], the [[International Center for Technology Assessment]] and the [[ETC Group (AGETC)|ETC Group]] issued the manifesto ''The Principles for the Oversight of Synthetic Biology''. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the [[human genome]] or [[human microbiome]].<ref>Katherine Xue for Harvard Magazine. September–October 2014 [http://harvardmagazine.com/2014/09/synthetic-biologys-new-menagerie Synthetic Biology’s New Menagerie]</ref><ref>Yojana Sharma for Scidev.net March 15, 2012. [http://www.scidev.net/global/genomics/news/ngos-call-for-international-regulation-of-synthetic-biology.html NGOs call for international regulation of synthetic biology]</ref> [[Richard Lewontin]] wrote that some of the safety tenets for oversight discussed in ''The Principles for the Oversight of Synthetic Biology'' are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<ref>[http://www.nybooks.com/articles/archives/2014/may/08/new-synthetic-biology-who-gains/?insrc=rel#fnr-1 The New Synthetic Biology: Who Gains?] (2014-05-08), [[Richard C. Lewontin]], ''[[New York Review of Books]]''</ref><br />
<br />
<br />
<br />
== Health and safety 健康和安全 ==<br />
<br />
{{Main|Hazards of synthetic biology}}<br />
<br />
<br />
<br />
The hazards of synthetic biology include [[biosafety]] hazards to workers and the public, [[biosecurity]] hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks.<ref name=":02">{{Cite journal|url=https://blogs.cdc.gov/niosh-science-blog/2017/01/24/synthetic-biology/|title=Synthetic Biology and Occupational Risk|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|date=2017-01-24|journal=Journal of Occupational and Environmental Hygiene|archive-url=|archive-date=|access-date=2018-11-30|last3=Schulte|first3=Paul|volume=14|issue=3|pages=224–236|pmid=27754800|doi=10.1080/15459624.2016.1237031|s2cid=205893358}}</ref><ref name=":12">{{Cite journal|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|last3=Schulte|first3=Paul|date=2016-10-18|title=Synthetic biology and occupational risk|journal=Journal of Occupational and Environmental Hygiene|volume=14|issue=3|pages=224–236|doi=10.1080/15459624.2016.1237031|pmid=27754800|s2cid=205893358|issn=1545-9624}}</ref> For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for [[bioterrorism]]. 
Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.<ref name=":7">{{Cite book|title=Biodefense in the Age of Synthetic Biology|date=2018-06-19|publisher=[[National Academies of Sciences, Engineering, and Medicine]]|isbn=9780309465182|location=|pages=|doi=10.17226/24890|pmid=30629396|last1=National Academies Of Sciences|first1=Engineering|author2=Division on Earth Life Studies|last3=Board On Life|first3=Sciences|author4=Board on Chemical Sciences Technology|author5=Committee on Strategies for Identifying Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology}}</ref> Lastly, environmental hazards include adverse effects on [[biodiversity]] and [[ecosystem services]], including potential changes to land use resulting from agricultural use of synthetic organisms.<ref name=":8">{{Cite web|url=http://ec.europa.eu/environment/integration/research/newsalert/multimedia/synthetic_biology_and_biodiversity.htm|title=Future Brief: Synthetic biology and biodiversity|last=|first=|date=September 2016|website=European Commission|pages=14–15|archive-url=|archive-date=|access-date=2019-01-14}}</ref><ref>{{Cite web|url=https://publications.europa.eu/en/publication-detail/-/publication/9b231c71-faf1-11e5-b713-01aa75ed71a1/language-en/format-PDF|title=Final opinion on synthetic biology III: Risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology|last=|first=|date=2016-04-04|website=EU Directorate-General for Health and Food Safety|pages=8, 27|archive-url=|archive-date=|access-date=2019-01-14}}</ref><br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.<ref name=":32" /><ref name=":22">{{Cite web|url=http://www.hse.gov.uk/research/rrpdf/rr944.pdf|title=Synthetic biology: A review of the technology, and current and future needs from the regulatory framework in Great Britain|last1=Bailey|first1=Claire|last2=Metcalf|first2=Heather|date=2012|website=UK [[Health and Safety Executive]]|archive-url=|archive-date=|access-date=2018-11-29|last3=Crook|first3=Brian}}</ref> Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<ref name=":5">{{Citation|last1=Pei|first1=Lei|title=Regulatory Frameworks for Synthetic Biology|date=2012|work=Synthetic Biology|pages=157–226|publisher=John Wiley & Sons, Ltd|doi=10.1002/9783527659296.ch5|isbn=9783527659296|last2=Bar‐Yam|first2=Shlomiya|last3=Byers‐Corbin|first3=Jennifer|last4=Casagrande|first4=Rocco|last5=Eichler|first5=Florentine|last6=Lin|first6=Allen|last7=Österreicher|first7=Martin|last8=Regardh|first8=Pernilla C.|last9=Turlington|first9=Ralph D.}}</ref><ref name=":4">{{Cite journal|last=Trump|first=Benjamin D.|date=2017-11-01|title=Synthetic biology regulation and governance: Lessons from TAPIC for the United States, European Union, and Singapore|journal=Health Policy|volume=121|issue=11|pages=1139–1146|doi=10.1016/j.healthpol.2017.07.010|pmid=28807332|issn=0168-8510|doi-access=free}}</ref><br />
<br />
<br />
<br />
== See also 请参阅 ==<br />
<br />
{{Colbegin|colwidth=20em}}<br />
<br />
* ''[[ACS Synthetic Biology]]'' (journal)<br />
<br />
* [[Bioengineering]]<br />
<br />
* [[Biomimicry]]<br />
<br />
* [[Carlson Curve]]<br />
<br />
* [[Chiral life concept]]<br />
<br />
* [[Computational biology]]<br />
<br />
* [[Computational biomodeling]]<br />
<br />
* [[DNA digital data storage]]<br />
<br />
* [[Engineering biology]]<br />
<br />
{{Colend}}<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biology]]. Its edit history can be viewed at [[合成生物学/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div><br />
<br />
粲兰 https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E5%AD%A6&diff=19662 合成生物学 2020-12-04T13:16:53Z<br />
<hr />
<div>此词条暂由袁一博翻译,翻译字数共4491,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{redirect|Artificial life form|simulated life forms|Artificial life}}<br />
<br />
{{short description|Interdisciplinary branch of biology and engineering}}<br />
<br />
{{Synthetic biology}}<br />
<br />
[[File:Synthetic Biology Research at NASA Ames.jpg|thumb|Synthetic Biology Research at [[Ames Research Center|NASA Ames Research Center]]. NASA埃姆斯研究中心的合成生物学研究。]]<br />
<br />
<br />
<br />
<br />
'''Synthetic biology''' ('''SynBio''') is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
Synthetic biology (SynBio) is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
合成生物学(SynBio)是一个多学科的研究领域,旨在创造新的生物部件、设备和系统,或重新设计已经在自然界中发现的系统。<br />
<br />
<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as [[biotechnology]], [[genetic engineering]], [[molecular biology]], [[molecular engineering]], [[systems biology]], [[Model lipid bilayer|membrane science]], [[biophysics]], [[Biological engineering|chemical and biological engineering]], [[Electrical engineering|electrical and computer engineering]], [[control engineering]] and [[evolutionary biology]].<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biotechnology, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology.<br />
<br />
它是科学的一个分支,涵盖了来自众多学科的广泛方法,例如生物技术、基因工程、分子生物学、分子工程、系统生物学、膜科学、生物物理学、化学与生物工程、电子与计算机工程、控制工程和进化生物学。<br />
<br />
<br />
<br />
Due to more powerful [[genetic engineering]] capabilities and decreased DNA synthesis and [[DNA sequencing|sequencing costs]], the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<ref>{{cite journal | last1 = Bueso | first1 = F. Y. | last2 = Tangney | first2 = M. | year = 2017 | title = Synthetic Biology in the Driving Seat of the Bioeconomy | url = | journal = Trends in Biotechnology | volume = 35 | issue = 5| pages = 373–378 | doi = 10.1016/j.tibtech.2017.02.002 | pmid = 28249675 }}</ref><br />
<br />
Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<br />
<br />
由于基因工程能力日益强大,以及 DNA 合成与测序成本不断下降,合成生物学领域正在迅速发展。2016年,来自40个国家的350多家公司积极参与合成生物学应用;所有这些公司在全球市场的净值估计为39亿美元。<br />
<br />
<br />
<br />
== Definition 定义 ==<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
合成生物学目前还没有公认的定义。以下是一些定义的示例:<br />
<br />
<br />
<br />
* "the use of a mixture of physical engineering and genetic engineering to create new (and, therefore, synthetic) life forms混合使用物理工程和基因工程来创建新的(因而也即合成的)生命形式。"<ref>{{cite journal | last1 = Hunter | first1 = D | year = 2013 | title = How to object to radically new technologies on the basis of justice: the case of synthetic biology | url = | journal = Bioethics | volume = 27 | issue = 8| pages = 426–434 | doi = 10.1111/bioe.12049 | pmid = 24010854 }}</ref><br />
<br />
<br />
* "an emerging field of research that aims to combine the knowledge and methods of biology, engineering and related disciplines in the design of chemically synthesized DNA to create organisms with novel or enhanced characteristics and traits一个新兴的研究领域,旨在将生物学、工程学和相关学科领域的知识和方法结合到化学合成 DNA 的设计中,从而创造出具有新颖或增强特性和特征的有机体。"<ref>{{cite journal | last1 = Gutmann | first1 = A | year = 2011 | title = The ethics of synthetic biology: guiding principles for emerging technologies | url = | journal = Hastings Center Report | volume = 41 | issue = 4| pages = 17–22 | doi = 10.1002/j.1552-146x.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
* "designing and constructing [[BioBrick|biological modules]], [[biological systems]], and [[biological machine]]s or, re-design of existing biological systems for useful purposes设计并构建生物积木、生物系统以及生物机器,或为有用的目的重新设计现有的生物系统。"<ref name="NakanoEckford2013">{{cite book|url={{google books |plainurl=y |id=uVhsAAAAQBAJ}}|title=Molecular Communication|last1=Nakano|first1=Tadashi|last2=Eckford|first2=Andrew W.|last3=Haraguchi|first3=Tokuko|date=12 September 2013|publisher=Cambridge University Press|isbn=978-1-107-02308-6|name-list-style=vanc}}</ref><br />
<br />
<br />
* "applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature" (The European Commission, 2005). This can include the possibility of a [[molecular assembler]], based upon biomolecular systems such as the [[ribosome]].<ref name="RoadMap">{{Cite web|url=http://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf|title=Productive Nanosystems: A Technology Roadmap|website=Foresight Institute}}</ref><br />
“将系统设计的工程范式应用到生物系统中,以产生具有自然界中不存在的新功能的、可预测且稳健的系统”(欧洲委员会,2005年)。这可能包括基于生物分子系统(例如核糖体)的分子组装器的可能性。<br />
<br />
<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
值得注意的是,合成生物学在传统上被分为两种不同的方法: 自上而下和自下而上。<br />
<br />
<br />
<br />
# The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
自上而下的方法包括利用代谢和基因工程技术赋予活细胞以新的功能。<br />
<br />
# The <u>bottom up</u> approach involves creating new biological systems ''in vitro'' by bringing together 'non-living' biomolecular components,<ref>{{cite journal | vauthors = Schwille P | title = Bottom-up synthetic biology: engineering in a tinkerer's world | journal = Science | volume = 333 | issue = 6047 | pages = 1252–4 | date = September 2011 | pmid = 21885774 | doi = 10.1126/science.1211701 | bibcode = 2011Sci...333.1252S | s2cid = 43354332 }}</ref> often with the aim of constructing an [[artificial cell]].<br />
<br />
The <u>bottom up</u> approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.<br />
<br />
自下而上的方法包括在体外创建新的生物系统,将“非活性”的生物分子组件聚集在一起,其目的通常是构建一个人工细胞。<br />
<br />
<br />
<br />
Biological systems are thus assembled module-by-module. [[Cell-free protein synthesis|Cell-free protein expression systems]] are often employed,<ref>{{cite journal | vauthors = Noireaux V, Libchaber A | title = A vesicle bioreactor as a step toward an artificial cell assembly | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 101 | issue = 51 | pages = 17669–74 | date = December 2004 | pmid = 15591347 | pmc = 539773 | doi = 10.1073/pnas.0408236101 | bibcode = 2004PNAS..10117669N }}</ref><ref>{{cite journal | vauthors = Hodgman CE, Jewett MC | title = Cell-free synthetic biology: thinking outside the cell | journal = Metabolic Engineering | volume = 14 | issue = 3 | pages = 261–9 | date = May 2012 | pmid = 21946161 | pmc = 3322310 | doi = 10.1016/j.ymben.2011.09.002 }}</ref><ref>{{cite journal | vauthors = Elani Y, Law RV, Ces O | title = Protein synthesis in artificial cells: using compartmentalisation for spatial organisation in vesicle bioreactors | journal = Physical Chemistry Chemical Physics | volume = 17 | issue = 24 | pages = 15534–7 | date = June 2015 | pmid = 25932977 | doi = 10.1039/C4CP05933F | bibcode = 2015PCCP...1715534E | doi-access = free }}</ref> as are membrane-based molecular machinery. 
There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells,<ref>{{cite journal | vauthors = Elani Y, Trantidou T, Wylie D, Dekker L, Polizzi K, Law RV, Ces O | title = Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules | journal = Scientific Reports | volume = 8 | issue = 1 | pages = 4564 | date = March 2018 | pmid = 29540757 | pmc = 5852042 | doi = 10.1038/s41598-018-22263-3 | bibcode = 2018NatSR...8.4564E }}</ref> and engineering communication between living and synthetic cell populations.<ref>{{cite journal | vauthors = Lentini R, Martín NY, Forlin M, Belmonte L, Fontana J, Cornella M, Martini L, Tamburini S, Bentley WE, Jousson O, Mansy SS | title = Two-Way Chemical Communication between Artificial and Natural Cells | journal = ACS Central Science | volume = 3 | issue = 2 | pages = 117–123 | date = February 2017 | pmid = 28280778 | pmc = 5324081 | doi = 10.1021/acscentsci.6b00330 }}</ref><br />
<br />
Biological systems are thus assembled module-by-module. Cell-free protein expression systems are often employed, as are membrane-based molecular machinery. There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells, and engineering communication between living and synthetic cell populations.<br />
<br />
生物系统就是这样逐个模块地组装起来的。无细胞蛋白表达系统和以膜为基础的分子机器经常被采用。越来越多的研究试图通过构建活细胞/合成细胞混合体,以及在活细胞与合成细胞群体之间建立工程化通讯,来弥合这两类方法之间的鸿沟。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
'''1910:''' First identifiable use of the term "synthetic biology" in [[Stéphane Leduc]]'s publication ''Théorie physico-chimique de la vie et générations spontanées''.<ref>[https://openlibrary.org/books/OL23348076M/Théorie_physico-chimique_de_la_vie_et_générations_spontanées Théorie physico-chimique de la vie et générations spontanées, S. Leduc, 1910]</ref> He also noted this term in another publication, ''La Biologie Synthétique'' in 1912.<ref>{{cite book |url=http://www.peiresc.org/bstitre.htm |title=La biologie synthétique, étude de biophysique |last=Leduc |first=Stéphane |date=1912 | veditors = Poinat A }}</ref><br />
<br />
1910: First identifiable use of the term "synthetic biology" in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also noted this term in another publication, La Biologie Synthétique in 1912.<br />
<br />
1910年: 斯特凡纳·勒杜克 (Stéphane Leduc) 在其出版物《Théorie physico-chimique de la vie et générations spontanées》中首次可考地使用了“合成生物学”一词。他还在1912年的另一本出版物《La Biologie Synthétique》中提到了这个术语。<br />
<br />
<br />
<br />
'''1961:''' Jacob and Monod postulate cellular regulation by molecular networks from their study of the ''lac'' operon in ''E. coli'' and envision the ability to assemble new systems from molecular components.<ref>Jacob, F. & Monod, J. On the regulation of gene activity. Cold Spring Harb. Symp. Quant. Biol. 26, 193–211 (1961).</ref><br />
<br />
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components.<br />
<br />
1961年: 雅各布 (Jacob) 和莫诺德 (Monod) 基于他们对大肠杆菌中乳糖操纵子的研究,提出了分子网络调控细胞的假说,并设想了由分子组件组装新系统的能力。<br />
<br />
<br />
<br />
'''1973:''' First molecular cloning and amplification of DNA in a plasmid is published in ''P.N.A.S.'' by Cohen, Boyer ''et al.'' constituting the dawn of synthetic biology.<ref>{{cite journal | vauthors = Cohen SN, Chang AC, Boyer HW, Helling RB | title = Construction of biologically functional bacterial plasmids in vitro | journal = Proc. Natl. Acad. Sci. USA | volume = 70 | issue = 11 | pages = 3240–3244 | date = 1973 | pmid = 4594039 | doi = 10.1073/pnas.70.11.3240 | bibcode = 1973PNAS...70.3240C | pmc = 427208 }}</ref><br />
<br />
1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al. constituting the dawn of synthetic biology.<br />
<br />
1973年: 科恩 (Cohen)、博耶 (Boyer) 等人在 P.N.A.S. 上发表了第一例在质粒中对 DNA 进行分子克隆和扩增的工作,开启了合成生物学的黎明。<br />
<br />
<br />
<br />
'''1978:''' [[Werner Arber|Arber]], [[Daniel Nathans|Nathans]] and [[Hamilton O. Smith|Smith]] win the [[Nobel Prize in Physiology or Medicine]] for the discovery of [[restriction enzyme]]s, leading Szybalski to offer an editorial comment in the journal ''[[Gene (journal)|Gene]]'':<br />
<br />
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene:<br />
<br />
1978年: 阿尔伯 (Arber)、纳森斯 (Nathans) 和史密斯 (Smith) 因发现限制性内切酶而获得诺贝尔生理学或医学奖,这使得齐巴尔斯基 (Szybalski) 在《基因》(Gene) 杂志上发表了一篇社论评论:<br />
<br />
<br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.<ref>{{cite journal | vauthors = Szybalski W, Skalka A | title = Nobel prizes and restriction enzymes | journal = Gene | volume = 4 | issue = 3 | pages = 181–2 | date = November 1978 | pmid = 744485 | doi = 10.1016/0378-1119(78)90016-1 }}</ref></blockquote><br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.</blockquote><br />
<br />
<blockquote>限制性核酸酶的研究不仅使我们能够很容易地构建重组 DNA 分子和分析单个基因,而且使我们进入了合成生物学的新时代:在这个时代,不仅可以描述和分析现有的基因,而且可以构建和评估新的基因排列。</blockquote><br />
<br />
<br />
<br />
'''1988:''' First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in ''Science'' by Mullis ''et al.''<ref>{{cite journal | vauthors = Saiki RK, Gelfand DH, Stoffel S, Scharf SJ, Higuchi R, Horn GT, Mullis KB, Erlich HA | title = Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase | journal = Science | volume = 239 | issue = 4839 | pages = 487–491 | date = 1988 | pmid = 2448875 | doi = 10.1126/science.239.4839.487 }}</ref> This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988年: 穆利斯 (Mullis) 等人在《科学》杂志上发表了首次利用热稳定 DNA 聚合酶通过聚合酶链式反应 (PCR) 扩增 DNA 的成果。这避免了在每个 PCR 循环后补加新的 DNA 聚合酶,从而大大简化了 DNA 的突变和组装。<br />
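The practical impact of a thermostable polymerase comes from running many amplification cycles back-to-back without intervention. The expected amplification can be sketched with a toy calculation; this example is illustrative rather than from the article, and the per-cycle efficiency value is a hypothetical figure:

```python
# Toy model of PCR amplification: an ideal reaction doubles the template
# every cycle; a real reaction amplifies by (1 + efficiency) per cycle,
# where 0 < efficiency <= 1. Values here are for illustration only.

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Expected copy number after `cycles` rounds of PCR.

    efficiency=1.0 is the ideal case of perfect doubling each cycle.
    """
    return initial_copies * (1 + efficiency) ** cycles

ideal = pcr_copies(10, 30)           # 10 templates, 30 ideal cycles: 10 * 2**30
typical = pcr_copies(10, 30, 0.9)    # sub-ideal amplification, still enormous
```

Even the sub-ideal case yields billions of copies from ten templates, which is why a polymerase that survives the denaturation step made routine mutagenesis and assembly workflows feasible.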
<br />
<br />
<br />
'''2000:''' Two papers in [[Nature (journal)|Nature]] report [[synthetic biological circuits]], a genetic toggle switch and a biological clock, by combining genes within [[Escherichia coli|''E. coli'']] cells.<ref name=":0">{{cite journal | vauthors = Elowitz MB, Leibler S | title = A synthetic oscillatory network of transcriptional regulators | journal = Nature | volume = 403 | issue = 6767 | pages = 335–8 | date = January 2000 | pmid = 10659856 | doi = 10.1038/35002125 | bibcode = 2000Natur.403..335E | s2cid = 41632754 }}</ref><ref name=":1">{{cite journal | vauthors = Gardner TS, Cantor CR, Collins JJ | title = Construction of a genetic toggle switch in Escherichia coli | journal = Nature | volume = 403 | issue = 6767 | pages = 339–42 | date = January 2000 | pmid = 10659857 | doi = 10.1038/35002131 | bibcode = 2000Natur.403..339G | s2cid = 345059 }}</ref><br />
<br />
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.<br />
<br />
2000年: 《自然》杂志上的两篇论文报告了通过组合大肠杆菌细胞内的基因而构建的合成生物电路:一个基因拨动开关和一个生物钟。<br />
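The toggle switch mentioned above is bistable: which of the two repressors ends up dominant depends on the starting state. As an illustrative sketch (not from the article), the two-repressor mutual-inhibition model associated with Gardner et al. (2000) can be integrated numerically; the parameter values below are hypothetical, chosen only to land in a bistable regime:

```python
# Illustrative sketch of a genetic toggle switch: two repressors u and v,
# each inhibiting the other's synthesis. Euler integration of
#   du/dt = alpha1 / (1 + v**beta)  - u
#   dv/dt = alpha2 / (1 + u**gamma) - v
# Parameters are hypothetical, not taken from the original paper.

def simulate_toggle(alpha1=10.0, alpha2=10.0, beta=2.0, gamma=2.0,
                    u0=0.1, v0=5.0, dt=0.01, steps=20000):
    """Return the (u, v) state after integrating to t = steps * dt."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha1 / (1.0 + v ** beta) - u
        dv = alpha2 / (1.0 + u ** gamma) - v
        u += du * dt
        v += dv * dt
    return u, v

# Starting with v dominant settles into the v-high/u-low state; the
# mirrored start settles into the opposite state (bistability).
u, v = simulate_toggle(u0=0.1, v0=5.0)
u2, v2 = simulate_toggle(u0=5.0, v0=0.1)
```

The same dynamics underlie the switch's use as a one-bit memory: a transient inducer pulse can flip the system from one stable state to the other, and it stays there after the pulse ends.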
<br />
<br />
<br />
'''2003:''' The most widely used standardized DNA parts, [[BioBrick]] plasmids, are invented by [[Tom Knight (scientist)|Tom Knight]].<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003年: 最广泛使用的标准化 DNA 部件,即生物积木 (BioBrick) 质粒,由汤姆·奈特 (Tom Knight) 发明。这些部件将成为次年(2004年)在麻省理工学院创办的国际基因工程机器大赛 (iGEM) 的核心。<br />
<br />
<br />
<br />
[[File:Synthetic Biology Open Language (SBOL) standard visual symbols.png|thumb|upright=1.25| [[Synthetic Biology Open Language]] (SBOL) standard visual symbols for use with [[BioBrick|BioBricks Standard]]]]<br />
<br />
与生物积木 (BioBricks) 标准一起使用的合成生物学开放式语言 (SBOL) 标准视觉符号<br />
<br />
<br />
<br />
'''2003:''' Researchers engineer an artemisinin precursor pathway in ''E. coli''.<ref>Martin, V. J., Pitera, D. J., Withers, S. T., Newman, J. D. & Keasling, J. D. Engineering a mevalonate pathway in Escherichia coli for production of terpenoids. Nature Biotech. 21, 796–802 (2003).</ref><br />
<br />
2003: Researchers engineer an artemisinin precursor pathway in E. coli.<br />
<br />
2003年: 研究人员在大肠杆菌中设计出青蒿素前体途径。<br />
<br />
<br />
<br />
'''2004:''' First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004年: 第一届合成生物学国际会议,合成生物学1.0(SB1.0)在美国麻省理工学院举行。<br />
<br />
<br />
<br />
'''2005:''' Researchers develop a light-sensing circuit in ''E. coli''.<ref>{{cite journal | last1 = Levskaya | first1 = A. | display-authors = etal | year = 2005 | title = Synthetic biology: engineering Escherichia coli to see light | url = | journal = Nature | volume = 438 | issue = 7067| pages = 441–442 | doi = 10.1038/nature04405 | pmid = 16306980 | s2cid = 4428475 }}</ref> Another group designs circuits capable of multicellular pattern formation.<ref>Basu, S., Gerchman, Y., Collins, C. H., Arnold, F. H. & Weiss, R. A synthetic multicellular system for programmed pattern formation. ''Nature'' 434,</ref><br />
<br />
2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation.<br />
<br />
2005年: 研究人员在大肠杆菌中开发出一种感光电路。另一个研究小组设计出了能够形成多细胞模式的电路。<br />
<br />
<br />
<br />
'''2006:''' Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<ref>{{cite journal | last1 = Anderson | first1 = J. C. | last2 = Clarke | first2 = E. J. | last3 = Arkin | first3 = A. P. | last4 = Voigt | first4 = C. A. | year = 2006 | title = Environmentally controlled invasion of cancer cells by engineered bacteria | url = | journal = J. Mol. Biol. | volume = 355 | issue = 4| pages = 619–627 | doi = 10.1016/j.jmb.2005.10.076 | pmid = 16330045 }}</ref><br />
<br />
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<br />
<br />
2006年: 研究人员设计了一种能促进细菌侵入肿瘤细胞的合成电路。<br />
<br />
<br />
<br />
'''2010:''' Researchers publish in ''Science'' the first synthetic bacterial genome, called ''M. mycoides'' JCVI-syn1.0.<ref name="gibson52" /><ref>{{Cite news|url=https://www.telegraph.co.uk/news/science/science-news/7747779/American-scientist-who-created-artificial-life-denies-playing-God.html|title=American scientist who created artificial life denies 'playing God'|last=|first=|date=May 2010|website=The Telegraph|url-status=live|archive-url=|archive-date=|access-date=}}</ref> The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010年: 研究人员在《科学》杂志上发表了第一个人工合成的细菌基因组,名为丝状支原体 JCVI-syn1.0。该基因组由化学合成的 DNA 经酵母重组组装而成。<br />
<br />
<br />
<br />
'''2011:''' Functional synthetic chromosome arms are engineered in yeast.<ref>{{cite journal | last1 = Dymond | first1 = J. S. | display-authors = etal | year = 2011 | title = Synthetic chromosome arms function in yeast and generate phenotypic diversity by design | url = | journal = Nature | volume = 477 | issue = 7365 | pages = 816–821 | doi = 10.1038/nature10403 | pmid = 21918511 | pmc = 3774833 }}</ref><br />
<br />
2011: Functional synthetic chromosome arms are engineered in yeast.<br />
<br />
2011年: 成功在酵母中设计出功能性合成染色体臂。<br />
<br />
<br />
<br />
'''2012:''' Charpentier and Doudna labs publish in ''Science'' the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.<ref>{{cite journal | vauthors = Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E | title = A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity | journal = Science | volume = 337 | issue = 6096 | pages = 816–821 | date = 2012 | pmid = 22745249 | doi = 10.1126/science.1225829 | pmc = 6286148 }}</ref> This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012年: Charpentier 和 Doudna 实验室在《科学》杂志上发表了对 CRISPR-Cas9 细菌免疫系统进行编程以实现靶向 DNA 切割的研究。这项技术极大地简化和扩展了真核生物的基因编辑。<br />
<br />
<br />
<br />
'''2019:''' Scientists at [[ETH Zurich]] report the creation of the first [[bacterial genome]], named ''[[Caulobacter crescentus|Caulobacter ethensis-2.0]]'', made entirely by a computer, although a related [[wikt:viability|viable form]] of ''C. ethensis-2.0'' does not yet exist.<ref name="EA-20190401">{{cite news |author=ETH Zurich |title=First bacterial genome created entirely with a computer |url=https://www.eurekalert.org/pub_releases/2019-04/ez-fbg032819.php |date=1 April 2019 |work=[[EurekAlert!]] |accessdate=2 April 2019 |author-link=ETH Zurich }}</ref><ref name="PNAS20190401">{{cite journal |author=Venetz, Jonathan E. |display-authors=et al. |title=Chemical synthesis rewriting of a bacterial genome to achieve design flexibility and biological functionality |date=1 April 2019 |journal=[[Proceedings of the National Academy of Sciences of the United States of America]] |volume=116 |issue=16 |pages=8070–8079 |doi=10.1073/pnas.1818259116 |pmid=30936302 |pmc=6475421 }}</ref><br />
<br />
2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.<br />
<br />
2019年: 苏黎世联邦理工学院(ETH Zurich)的科学家报告称,他们创造出了第一个完全由计算机生成的细菌基因组,命名为 Caulobacter ethensis-2.0,不过与之对应的可存活形式的 C. ethensis-2.0 尚不存在。<br />
<br />
<br />
<br />
'''2019:''' Researchers report the production of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515">{{cite news |last=Zimmer |first=Carl |authorlink=Carl Zimmer |title=Scientists Created Bacteria With a Synthetic Genome. Is This Artificial Life? - In a milestone for synthetic biology, colonies of E. coli thrive with DNA constructed from scratch by humans, not nature. |url=https://www.nytimes.com/2019/05/15/science/synthetic-genome-bacteria.html |date=15 May 2019 |work=[[The New York Times]] |accessdate=16 May 2019 }}</ref><ref name="NAT-20190515">{{cite journal |author=Fredens, Julius |display-authors=et al. |title=Total synthesis of Escherichia coli with a recoded genome |date=15 May 2019 |journal=[[Nature (journal)|Nature]] |volume=569 |issue=7757 |pages=514–518 |doi=10.1038/s41586-019-1192-5 |pmid=31092918 |pmc=7039709 |bibcode=2019Natur.569..514F }}</ref><br />
<br />
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacteria Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons instead, in order to encode 20 amino acids.<br />
<br />
2019年: 研究人员报告制造出一种新的合成(可能是人工的)可存活生命形式,即大肠杆菌的一个变种:通过将细菌基因组中天然的64种密码子减少到59种,仍能编码全部20种氨基酸。<br />
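The recoding idea above, swapping every occurrence of a targeted codon for a synonym so the genome uses fewer codons while still encoding the same proteins, can be sketched as follows. This is an illustrative toy, not the published recoding scheme; the replacement table `RECODING` is a hypothetical example of synonymous swaps (serine codons and a stop codon).<br />

```python
# Hypothetical sketch of genome recoding by synonymous codon replacement.
# The replacement table is illustrative, not the published scheme.
RECODING = {"TCG": "AGC",   # serine -> synonymous serine codon
            "TCA": "AGT",   # serine -> synonymous serine codon
            "TAG": "TAA"}   # stop   -> synonymous stop codon

def recode(sequence: str) -> str:
    """Replace targeted codons with synonymous ones, codon by codon."""
    codons = [sequence[i:i + 3] for i in range(0, len(sequence), 3)]
    return "".join(RECODING.get(c, c) for c in codons)
```

The protein sequence is unchanged because each swap stays within one amino acid's codon family; only the codon usage of the genome shrinks.<br />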
<br />
<br />
<br />
== Perspectives 各方观点 ==<br />
<br />
Engineers view biology as a ''technology'' (in other words, a given system's ''[[biotechnology]]'' or its ''[[biological engineering]]'').<ref>{{cite journal | volume = 6 | last = Zeng | first = Jie (Bangzhe) | title = On the concept of systems bio-engineering | journal = Communication on Transgenic Animals, June 1994, CAS, PRC }}</ref> Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see [[Biomedical Engineering]]) and our environment.<ref>{{cite journal | volume = 6 | last = Chopra | first = Paras | author2 = Akhil Kamma | title = Engineering life through Synthetic Biology | journal = In Silico Biology }}</ref><br />
<br />
Engineers view biology as a technology (in other words, a given system's biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see Biomedical Engineering) and our environment.<br />
<br />
工程师将生物学视为一种技术(换句话说,即特定系统的生物技术或其生物工程)。合成生物学包括对生物技术的广泛重新定义和扩展,其最终目标是能够设计并建造工程化的生物系统,用以处理信息、操纵化学物质、制造材料和结构、生产能源、提供食物,并维护和增强人类健康(见生物医学工程)与我们的环境。<br />
<br />
<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. {{citation needed|date=May 2020}}<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. <br />
<br />
合成生物学的研究可以根据其处理问题的方法大致分为几类:生物部件的标准化、生物分子工程和基因组工程。<br />
<br />
<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. [[Genetic engineering]] includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
生物分子工程包括旨在创建功能单元工具包的方法,这些功能单元可被引入活细胞以呈现新的技术功能。基因工程包括为完整或最小生物体构建合成染色体的方法。<br />
<br />
<br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<ref>{{cite journal | vauthors = Channon K, Bromley EH, Woolfson DN | title = Synthetic biology through biomolecular design and engineering | journal = Current Opinion in Structural Biology | volume = 18 | issue = 4 | pages = 491–8 | date = August 2008 | pmid = 18644449 | doi = 10.1016/j.sbi.2008.06.006 }}</ref><br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<br />
<br />
生物分子设计是指对生物分子组分进行从头设计与加性组合的总体思想。这些方法有着相似的任务:通过创造性地操纵前一层次中较简单的部件,在更高的复杂性层次上开发出更具合成性的实体。<br />
<br />
<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate.<ref>{{cite journal | first = M | last = Stone | title = Life Redesigned to Suit the Engineering Crowd | journal = Microbe | volume = 1 | issue = 12 | pages = 566–570 | date = 2006 | s2cid = 7171812 | url = https://pdfs.semanticscholar.org/8d45/e0f37a0fb6c1a3c659c71ee9c52619b18364.pdf }}</ref> Re-writers draw inspiration from [[refactoring]], a process sometimes used to improve computer software.<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.<br />
<br />
另一方面,"重写者"是指对检验生物系统不可还原性感兴趣的合成生物学家。由于天然生物系统的复杂性,从头开始重建感兴趣的天然系统会更简单,从而提供更易于理解、控制和操作的工程替代品。重写者的灵感来自重构(refactoring),即一种有时用于改进计算机软件的过程。<br />
<br />
<br />
<br />
== Enabling technologies 使能技术 ==<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include [[standardization]] of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.<ref>{{cite journal | vauthors = Baker D, Church G, Collins J, Endy D, Jacobson J, Keasling J, Modrich P, Smolke C, Weiss R | title = Engineering life: building a fab for biology | journal = Scientific American | volume = 294 | issue = 6 | pages = 44–51 | date = June 2006 | pmid = 16711359 | doi = 10.1038/scientificamerican0606-44 | bibcode = 2006SciAm.294f..44B }}</ref> Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and [[computer-aided design]] (CAD).<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).<br />
<br />
一些新的使能技术对于合成生物学的成功至关重要。相关概念包括生物部件的标准化和层次化抽象,以便在合成系统中使用这些部件。基本技术包括读写 DNA(测序与合成)。精确建模和计算机辅助设计(CAD)需要在多种条件下进行测量。<br />
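The standardization-plus-hierarchical-abstraction idea can be illustrated with a toy part/device hierarchy. This mimics no particular CAD tool; the part identifiers (pLac, B0034, GFP, B0015) are real BioBrick-style names used purely as examples.<br />

```python
# Toy illustration of hierarchical abstraction: standardized parts compose
# into devices, and devices nest into larger systems.
class Part:
    def __init__(self, name: str, kind: str):
        self.name, self.kind = name, kind

class Device:
    """A device is an ordered list of parts, itself usable as one unit."""
    def __init__(self, name: str, parts: list):
        self.name, self.parts = name, parts

    def flatten(self) -> list:
        """Expand nested devices down to the underlying parts."""
        out = []
        for p in self.parts:
            out.extend(p.flatten() if isinstance(p, Device) else [p])
        return out

promoter = Part("pLac", "promoter")
rbs = Part("B0034", "rbs")
gfp = Part("GFP", "cds")
term = Part("B0015", "terminator")
reporter = Device("gfp_reporter", [promoter, rbs, gfp, term])
system = Device("sensor_system", [reporter])  # a device reused as a unit
```

The point of the abstraction is that `system` can be designed in terms of `reporter` without re-deriving its internal parts each time.<br />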
<br />
<br />
<br />
=== DNA and gene synthesis DNA 和基因合成===<br />
<br />
{{Main|Artificial gene synthesis|Synthetic genomics}}Driven by dramatic decreases in costs of [[oligonucleotides|oligonucleotide]] ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level.<ref>{{cite journal | vauthors = Kosuri S, Church GM | title = Large-scale de novo DNA synthesis: technologies and applications | journal = Nature Methods | volume = 11 | issue = 5 | pages = 499–507 | date = May 2014 | pmid = 24781323 | doi = 10.1038/nmeth.2918 | pmc = 7098426 }}</ref> In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) [[Hepatitis C]] virus genome from chemically synthesized 60 to 80-mers.<ref>{{cite journal | vauthors = Blight KJ, Kolykhalov AA, Rice CM | title = Efficient initiation of HCV RNA replication in cell culture | journal = Science | volume = 290 | issue = 5498 | pages = 1972–4 | date = December 2000 | pmid = 11110665 | doi = 10.1126/science.290.5498.1972 | bibcode = 2000Sci...290.1972B }}</ref> In 2002 researchers at [[Stony Brook University]] succeeded in synthesizing the 7741 bp [[poliovirus]] genome from its published sequence, producing the second synthetic genome, spanning two years.<ref>{{cite journal | vauthors = Couzin J | title = Virology. 
Active poliovirus baked from scratch | journal = Science | volume = 297 | issue = 5579 | pages = 174–5 | date = July 2002 | pmid = 12114601 | doi = 10.1126/science.297.5579.174b | s2cid = 83531627 | url = https://semanticscholar.org/paper/248000e7bc654631ae217274a77253ceddf270a1 }}</ref> In 2003 the 5386 bp genome of the [[bacteriophage]] [[Phi X 174]] was assembled in about two weeks.<ref name="assembly2003">{{cite journal | vauthors = Smith HO, Hutchison CA, Pfannkoch C, Venter JC | title = Generating a synthetic genome by whole genome assembly: phiX174 bacteriophage from synthetic oligonucleotides | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 26 | pages = 15440–5 | date = December 2003 | pmid = 14657399 | pmc = 307586 | doi = 10.1073/pnas.2237126100 | bibcode = 2003PNAS..10015440S }}</ref> In 2006, the same team, at the [[J. Craig Venter Institute]], constructed and patented a [[Synthetic genomics|synthetic genome]] of a novel minimal bacterium, ''[[Mycoplasma laboratorium]]'' and were working on getting it functioning in a living cell.<ref>{{cite news|url=https://www.nytimes.com/2007/06/29/science/29cells.html|title=Scientists Transplant Genome of Bacteria|last=Wade|first=Nicholas|date=2007-06-29|work=The New York Times|access-date=2007-12-28|issn=0362-4331}}</ref><ref>{{cite journal | vauthors = Gibson DG, Benders GA, Andrews-Pfannkoch C, Denisova EA, Baden-Tillson H, Zaveri J, Stockwell TB, Brownley A, Thomas DW, Algire MA, Merryman C, Young L, Noskov VN, Glass JI, Venter JC, Hutchison CA, Smith HO | title = Complete chemical synthesis, assembly, and cloning of a Mycoplasma genitalium genome | journal = Science | volume = 319 | issue = 5867 | pages = 1215–20 | date = February 2008 | pmid = 18218864 | doi = 10.1126/science.1151721 | bibcode = 2008Sci...319.1215G | s2cid = 8190996 | url = https://semanticscholar.org/paper/8c662fd0e252c85d056aad7ff16009ebe1dd4cbc }}</ref><ref 
name="Ball">{{cite journal|last1=Ball|first1=Philip|date=2016|title=Man Made: A History of Synthetic Life|url=https://www.sciencehistory.org/distillations/magazine/man-made-a-history-of-synthetic-life|journal=Distillations|volume=2|issue=1|pages=15–23|access-date=22 March 2018}}</ref><br />
<br />
Driven by dramatic decreases in costs of oligonucleotide ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) Hepatitis C virus genome from chemically synthesized 60 to 80-mers. In 2002 researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome, spanning two years. In 2003 the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium and were working on getting it functioning in a living cell.<br />
<br />
由于寡核苷酸(oligos)合成成本的大幅下降和 PCR 的出现,由寡核苷酸构建的 DNA 的规模已提高到基因组水平。2000年,研究人员报告由化学合成的60至80聚体合成了9.6 kbp(千碱基对)的丙型肝炎病毒基因组。2002年,石溪大学的研究人员依据已发表的序列成功合成了7741 bp 的脊髓灰质炎病毒基因组,历时两年,产生了第二个合成基因组。2003年,噬菌体 Phi X 174 的5386 bp 基因组在约两周内组装完成。2006年,克雷格·文特尔研究所(J. Craig Venter Institute)的同一团队构建了一种新型最小细菌,即实验室支原体(Mycoplasma laboratorium)的合成基因组并申请了专利,他们正致力于使其在活细胞中发挥功能。<br />
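Conceptually, building long constructs from short oligos works by tiling the target sequence into overlapping fragments and stitching them back together via the shared overlaps, which is what PCR-based assembly achieves chemically. A minimal sketch of that bookkeeping; the fixed oligo length and overlap are simplifying assumptions.<br />

```python
def make_oligos(target: str, length: int = 20, overlap: int = 10) -> list[str]:
    """Tile the target into oligos that overlap by `overlap` bases."""
    step = length - overlap
    return [target[i:i + length] for i in range(0, len(target) - overlap, step)]

def assemble(oligos: list[str], overlap: int = 10) -> str:
    """Stitch oligos back together by their shared overlaps."""
    result = oligos[0]
    for oligo in oligos[1:]:
        assert result.endswith(oligo[:overlap]), "overlaps must match"
        result += oligo[overlap:]
    return result
```

Real protocols add error correction because chemically synthesized oligos carry mistakes; here the overlap check stands in for that verification step.<br />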
<br />
<br />
<br />
In 2007 it was reported that several companies were offering [[gene synthesis|synthesis of genetic sequences]] up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks.<ref>{{cite news| issn = 0362-4331| last = Pollack| first = Andrew| title = How Do You Like Your Genes? Biofabs Take Orders | work = The New York Times | access-date = 2007-12-28| date = 2007-09-12 | url = https://www.nytimes.com/2007/09/12/technology/techspecial/12gene.html?pagewanted=2&_r=1}}</ref> [[Oligonucleotide]]s harvested from a photolithographic- or inkjet-manufactured [[DNA chip]] combined with PCR and DNA mismatch error-correction allows inexpensive large-scale changes of [[codons]] in genetic systems to improve [[gene expression]] or incorporate novel amino-acids (see [[George M. Church]]'s and Anthony Forster's synthetic cell projects.<ref>{{Cite web|url=http://arep.med.harvard.edu/SBP|title=Synthetic Biology Projects|website=arep.med.harvard.edu|access-date=2018-02-17}}</ref><ref>{{cite journal | vauthors = Forster AC, Church GM | title = Towards synthesis of a minimal cell | journal = Molecular Systems Biology | volume = 2 | issue = 1 | pages = 45 | date = 2006-08-22 | pmid = 16924266 | pmc = 1681520 | doi = 10.1038/msb4100090 }}</ref>) This favors a synthesis-from-scratch approach.<br />
<br />
In 2007 it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip combined with PCR and DNA mismatch error-correction allows inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino-acids (see George M. Church's and Anthony Forster's synthetic cell projects.) This favors a synthesis-from-scratch approach.<br />
<br />
2007年有报道称,几家公司可提供长达2000个碱基对(bp)的基因序列合成,价格约为每碱基对1美元,周转时间不到两周。从光刻或喷墨制造的 DNA 芯片上获取的寡核苷酸,结合 PCR 和 DNA 错配纠错,可以低成本、大规模地改变遗传系统中的密码子,从而改善基因表达或引入新的氨基酸(参见乔治·丘奇和安东尼·福斯特的合成细胞项目)。这有利于采用从头合成的方法。<br />
<br />
<br />
<br />
Additionally, the [[CRISPR|CRISPR/Cas]] system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years".<ref name="washpost_crispr">{{cite news|last1=Basulto|first1=Dominic|title=Everything you need to know about why CRISPR is such a hot technology|url=https://www.washingtonpost.com/news/innovations/wp/2015/11/04/everything-you-need-to-know-about-why-crispr-is-such-a-hot-technology/|access-date=5 December 2015|work=Washington Post|date=November 4, 2015}}</ref> While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.<ref name="washpost_crispr" /> Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in [[Do-it-yourself biology|biohacking]].<ref>{{cite news|last1=Kahn|first1=Jennifer|title=The Crispr Quandary|url=https://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html?_r=0|access-date=5 December 2015|work=New York Times|date=November 9, 2015}}</ref><ref>{{cite journal|last1=Ledford|first1=Heidi|title=CRISPR, the disruptor|url=http://www.nature.com/news/crispr-the-disruptor-1.17673|access-date=5 December 2015|agency=Nature News|journal=Nature|date=June 3, 2015|pmid=26040877|doi=10.1038/522020a|volume=522|issue=7554|pages=20–4|bibcode=2015Natur.522...20L|doi-access=free}}</ref><ref>{{cite magazine|last1=Higginbotham|first1=Stacey|title=Top VC Says Gene Editing Is Riskier Than Artificial Intelligence|url=http://fortune.com/2015/12/04/khosla-crispr-ai/|access-date=5 December 2015|magazine=Fortune|date=4 December 2015}}</ref><br />
<br />
Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.<br />
<br />
此外,CRISPR/Cas 系统已经成为一种很有前途的基因编辑技术。它被称作“近30年来合成生物学领域最重要的创新”。虽然其他方法需要数月或数年来编辑基因序列,CRISPR 将这个时间缩短到数周。<br />
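Cas9 targeting in such gene-editing work depends on a roughly 20-nt protospacer followed immediately by an "NGG" PAM motif. A simplified single-strand scan for candidate sites; this is a toy sketch, not a real guide-design tool, which would also search the reverse strand and score off-targets.<br />

```python
# Toy scan for Cas9 target sites: a `spacer_len`-nt protospacer must sit
# immediately 5' of an NGG PAM on the searched strand.
def find_targets(seq: str, spacer_len: int = 20) -> list[tuple[int, str]]:
    """Return (position, protospacer) pairs followed by an NGG PAM."""
    hits = []
    for i in range(len(seq) - spacer_len - 2):
        pam = seq[i + spacer_len: i + spacer_len + 3]
        if pam[1:] == "GG":          # N can be any base
            hits.append((i, seq[i: i + spacer_len]))
    return hits
```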
<br />
<br />
<br />
=== Sequencing 测序 ===<br />
<br />
[[DNA sequencing]] determines the order of [[nucleotide]] bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<ref>{{cite journal| author = Rollie| date = 2012 |title = Designing biological systems: Systems Engineering meets Synthetic Biology| journal = Chemical Engineering Science| volume = 69 | pages = 1–29| doi=10.1016/j.ces.2011.10.068| issue=1|display-authors=etal}}</ref><br />
<br />
DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<br />
<br />
DNA 测序确定 DNA 分子中核苷酸碱基的顺序。合成生物学家在工作中以多种方式使用 DNA 测序。首先,大规模基因组测序工作不断提供关于天然生物体的信息,这些信息为合成生物学家构建部件和装置提供了丰富的素材。其次,测序可以验证所构建的系统是否符合预期。第三,快速、廉价和可靠的测序有助于快速检测和识别合成系统与生物体。<br />
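The second use of sequencing mentioned above, verifying that a fabricated system matches its design, reduces in the simplest case to comparing the sequencing read against the intended sequence position by position. Real pipelines use alignment; this sketch assumes reads of equal length with no indels.<br />

```python
def verify(intended: str, sequenced: str) -> list[int]:
    """Return positions where the sequencing read disagrees with the design."""
    if len(intended) != len(sequenced):
        raise ValueError("length mismatch; alignment would be needed")
    return [i for i, (a, b) in enumerate(zip(intended, sequenced)) if a != b]
```

An empty list means the construct is verified at this (idealized) level of stringency.<br />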
<br />
<br />
<br />
=== Microfluidics 微流控 ===<br />
<br />
[[Microfluidics]], in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them.<ref>{{cite journal | vauthors = Elani Y | title = Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology | journal = Biochemical Society Transactions | volume = 44 | issue = 3 | pages = 723–30 | date = June 2016 | pmid = 27284034 | pmc = 4900754 | doi = 10.1042/BST20160052 }}</ref><ref>{{cite journal | vauthors = Gach PC, Iwai K, Kim PW, Hillson NJ, Singh AK | title = Droplet microfluidics for synthetic biology | journal = Lab on a Chip | volume = 17 | issue = 20 | pages = 3388–3400 | date = October 2017 | pmid = 28820204 | doi = 10.1039/C7LC00576H | osti = 1421856 | url = http://www.escholarship.org/uc/item/6cr3k0v5 }}</ref> It is widely employed in screening assays.<ref>{{cite journal | vauthors = Vinuselvi P, Park S, Kim M, Park JM, Kim T, Lee SK | title = Microfluidic technologies for synthetic biology | journal = International Journal of Molecular Sciences | volume = 12 | issue = 6 | pages = 3576–93 | date = 2011-06-03 | pmid = 21747695 | pmc = 3131579 | doi = 10.3390/ijms12063576 }}</ref><br />
<br />
Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them. It is widely employed in screening assays.<br />
<br />
微流体,特别是液滴微流体,是一种新兴的工具,用于构造新的元件,并分析和表征它们。它被广泛应用于筛选分析。<br />
<br />
<br />
<br />
=== Modularity 模块化 ===<br />
<br />
The most used<ref name="primer">{{Cite book|title=Synthetic Biology – A Primer|last1=Freemont|first1=Paul S.|last2=Kitney|first2=Richard I.| name-list-style = vanc |date=2012|publisher=World Scientific|isbn=978-1-84816-863-3|doi=10.1142/p837}}</ref>{{rp|22–23}} standardized DNA parts are [[BioBrick]] plasmids, invented by [[Tom Knight (scientist)|Tom Knight]] in 2003.<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> Biobricks are stored at the [[Registry of Standard Biological Parts]] in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the [[international Genetically Engineered Machine]] (iGEM) competition.<ref name="primer" />{{rp|22–23}}<br />
<br />
The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. Biobricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition.<br />
<br />
最常用的标准化 DNA 部件是生物积木(BioBrick)质粒,由汤姆·奈特于2003年发明。生物积木保存在马萨诸塞州剑桥的标准生物部件注册处。生物积木标准已被全世界成千上万的学生用于国际基因工程机器竞赛(iGEM)。<br />
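The key property of the BioBrick standard is idempotent assembly: joining two parts yields a product that is itself a standard part, so composition can repeat indefinitely. A toy sketch of that idea; the 8-bp scar sequence used here is the commonly cited BioBrick RFC[10] scar, stated as an assumption rather than a full model of the restriction-enzyme chemistry.<br />

```python
# Sketch of idempotent BioBrick composition: joining two inserts leaves an
# 8-bp scar between them, and the product is again a composable insert.
SCAR = "TACTAGAG"  # assumed RFC[10]-style scar

def compose(part_a: str, part_b: str) -> str:
    """Join two BioBrick inserts; the product can be composed again."""
    return part_a + SCAR + part_b
```

Because the output of `compose` is a valid input to `compose`, assemblies of arbitrary depth follow from the same single operation.<br />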
<br />
<br />
<br />
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilient to harsh conditions). Interactions such as [[coiled coil]]s,<ref>{{cite journal | vauthors = Woolfson DN, Bartlett GJ, Bruning M, Thomson AR | title = New currency for old rope: from coiled-coil assemblies to α-helical barrels | journal = Current Opinion in Structural Biology | volume = 22 | issue = 4 | pages = 432–41 | date = August 2012 | pmid = 22445228 | doi = 10.1016/j.sbi.2012.03.002 }}</ref> [[SH3 domain]]-peptide binding<ref>{{cite journal | vauthors = Dueber JE, Wu GC, Malmirchegini GR, Moon TS, Petzold CJ, Ullal AV, Prather KL, Keasling JD | title = Synthetic protein scaffolds provide modular control over metabolic flux | journal = Nature Biotechnology | volume = 27 | issue = 8 | pages = 753–9 | date = August 2009 | pmid = 19648908 | doi = 10.1038/nbt.1557 | s2cid = 2756476 }}</ref> or [[SpyCatcher|SpyTag/SpyCatcher]]<ref>{{cite journal | vauthors = Reddington SC, Howarth M | title = Secrets of a covalent interaction for biomaterials and biotechnology: SpyTag and SpyCatcher | journal = Current Opinion in Chemical Biology | volume = 29 | pages = 94–9 | date = December 2015 | pmid = 26517567 | doi = 10.1016/j.cbpa.2015.10.002 | doi-access = free }}</ref> offer such control. In addition it is necessary to regulate protein-protein interactions in cells, such as with light (using [[light-oxygen-voltage-sensing domain]]s) or cell-permeable small molecules by [[chemically induced dimerization]].<ref>{{cite journal | vauthors = Bayle JH, Grimley JS, Stankunas K, Gestwicki JE, Wandless TJ, Crabtree GR | title = Rapamycin analogs with differential binding specificity permit orthogonal control of protein activity | journal = Chemistry & Biology | volume = 13 | issue = 1 | pages = 99–107 | date = January 2006 | pmid = 16426976 | doi = 10.1016/j.chembiol.2005.10.017 | doi-access = free }}</ref><br />
<br />
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and link different proteins together. The interaction strength between protein partners should be tunable from a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilience to harsh conditions). Interactions such as coiled coils, SH3 domain-peptide binding or SpyTag/SpyCatcher offer such control. In addition it is necessary to regulate protein-protein interactions in cells, such as with light (using light-oxygen-voltage-sensing domains) or cell-permeable small molecules by chemically induced dimerization.<br />
<br />
虽然 DNA 对信息存储最为重要,但细胞的大部分活动是由蛋白质完成的。现有工具可以将蛋白质送到细胞的特定区域,并将不同的蛋白质连接在一起。蛋白质伙伴之间的相互作用强度应当是可调的:从数秒的寿命(适合动态信号事件)到不可逆的相互作用(适合装置稳定性或耐受苛刻条件)。卷曲螺旋、SH3 结构域-肽结合或 SpyTag/SpyCatcher 等相互作用提供了这样的控制。此外,还需要调控细胞中的蛋白质-蛋白质相互作用,例如利用光(借助光-氧-电压感应结构域)或通过化学诱导二聚化的细胞可渗透小分子。<br />
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the modeling module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<ref name="altszylerUltrasens2014">{{cite journal | vauthors = Altszyler E, Ventura A, Colman-Lerner A, Chernomoretz A | title = Impact of upstream and downstream constraints on a signaling module's ultrasensitivity | journal = Physical Biology | volume = 11 | issue = 6 | pages = 066003 | date = October 2014 | pmid = 25313165 | pmc = 4233326 | doi = 10.1088/1478-3975/11/6/066003 | bibcode = 2014PhBio..11f6003A }}</ref><ref name="altszylerUltrasens2017">{{cite journal | vauthors = Altszyler E, Ventura AC, Colman-Lerner A, Chernomoretz A | title = Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations | journal = PLOS ONE | volume = 12 | issue = 6 | pages = e0180083 | year = 2017 | pmid = 28662096 | pmc = 5491127 | doi = 10.1371/journal.pone.0180083 | bibcode = 2017PLoSO..1280083A | arxiv = 1608.08007 }}</ref><br />
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the modeling module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<br />
<br />
在活细胞中,分子模体嵌入在一个包含上游和下游组件的更大网络中。这些组件可能改变该模块的信号传导能力。对于超敏模块而言,模块对整体灵敏度的贡献可能不同于其单独存在时所表现的灵敏度。<br />
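The point about module sensitivity can be made concrete with a Hill-type module: in isolation its local log-log gain approaches the Hill coefficient, but placed downstream of a saturating component the same module contributes less apparent sensitivity to the overall cascade (by the chain rule, the gains multiply). A minimal numerical sketch; all parameter values are illustrative.<br />

```python
import math

def hill(x: float, K: float = 1.0, n: float = 4.0) -> float:
    """Hill-type ultrasensitive module."""
    return x**n / (K**n + x**n)

def log_gain(f, x: float, h: float = 1e-6) -> float:
    """Local response coefficient d ln f / d ln x, estimated numerically."""
    return (math.log(f(x * (1 + h))) - math.log(f(x))) / math.log(1 + h)

# Module in isolation: for x << K the log-gain approaches the Hill coefficient n.
isolated = log_gain(hill, 0.1)

# Embedded module: a saturating upstream stage compresses the input range,
# so the same module contributes less sensitivity to the cascade as a whole.
upstream = lambda s: s / (1 + s)       # saturating upstream component
cascade = lambda s: hill(upstream(s))
embedded = log_gain(cascade, 0.1)
```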
<br />
<br />
<br />
=== Modeling 建模 ===<br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in [[Transcription (biology)|transcription]], [[Translation (biology)|translation]], regulation and induction of gene regulatory networks.<ref>{{cite journal | vauthors = Carbonell-Ballestero M, Duran-Nebreda S, Montañez R, Solé R, Macía J, Rodríguez-Caso C | title = A bottom-up characterization of transfer functions for synthetic biology designs: lessons from enzymology | journal = Nucleic Acids Research | volume = 42 | issue = 22 | pages = 14060–14069 | date = December 2014 | pmid = 25404136 | pmc = 4267673 | doi = 10.1093/nar/gku964 }}</ref><br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in transcription, translation, regulation and induction of gene regulatory networks.<br />
<br />
模型通过在构建之前更好地预测系统行为来指导工程生物系统的设计。合成生物学受益于更好的模型:生物分子如何结合底物并催化反应,DNA 如何编码指定细胞所需的信息,以及多组分集成系统如何运作。基因调控网络的多尺度模型侧重于合成生物学应用。模拟可以对基因调控网络的转录、翻译、调节和诱导中的所有生物分子相互作用进行建模。<br />
<br />
<ref>{{cite journal | vauthors = Kaznessis YN | title = Models for synthetic biology | journal = BMC Systems Biology | volume = 1 | issue = 1 | pages = 47 | date = November 2007 | pmid = 17986347 | pmc = 2194732 | doi = 10.1186/1752-0509-1-47 }}</ref><br />
<br />
<ref>{{cite conference |vauthors=Tuza ZA, Singhal V, Kim J, Murray RM | title = An in silico modeling toolbox for rapid prototyping of circuits in a biomolecular "breadboard" system. |book-title=52nd IEEE Conference on Decision and Control |date=December 2013 |doi=10.1109/CDC.2013.6760079}}</ref><br />
<br />
<br />
<br />
<br />
=== Synthetic transcription factors 合成转录因子 ===<br />
<br />
Studies have considered the components of the [[Transcription (biology)|DNA transcription]] mechanism. One desire of scientists creating [[synthetic biological circuit]]s is to be able to control the transcription of synthetic DNA in unicellular organisms ([[prokaryote]]s) and in multicellular organisms ([[eukaryote]]s). One study tested the adjustability of synthetic [[transcription factor]]s (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes.<ref name="Khalil AS 2012">{{cite journal | vauthors = Khalil AS, Lu TK, Bashor CJ, Ramirez CL, Pyenson NC, Joung JK, Collins JJ | title = A synthetic biology framework for programming eukaryotic transcription functions | journal = Cell | volume = 150 | issue = 3 | pages = 647–58 | date = August 2012 | pmid = 22863014 | pmc = 3653585 | doi = 10.1016/j.cell.2012.05.045 }}</ref> Researchers were able to mutate functional regions called [[zinc finger]]s, the DNA specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the [[eukaryotic translation]] mechanisms.<ref name="Khalil AS 2012"/><br />
<br />
<br />
<br />
<br />
== Applications 应用 ==<br />
<br />
=== Biological computers 生物计算机 ===<br />
<br />
<br />
A [[biological computer]] refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of [[logic gate]]s in a number of organisms,<ref>{{cite journal | vauthors = Singh V | title = Recent advances and opportunities in synthetic logic gates engineering in living cells | journal = Systems and Synthetic Biology | volume = 8 | issue = 4 | pages = 271–82 | date = December 2014 | pmid = 26396651 | pmc = 4571725 | doi = 10.1007/s11693-014-9154-6 }}</ref> and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform analog and/or digital computation.<ref>{{cite journal | vauthors = Purcell O, Lu TK | title = Synthetic analog and digital circuits for cellular computation and memory | journal = Current Opinion in Biotechnology | volume = 29 | pages = 146–55 | date = October 2014 | pmid = 24794536 | pmc = 4237220 | doi = 10.1016/j.copbio.2014.04.009 | series = Cell and Pathway Engineering }}</ref><ref>{{cite journal | vauthors = Daniel R, Rubens JR, Sarpeshkar R, Lu TK | title = Synthetic analog computation in living cells | journal = Nature | volume = 497 | issue = 7451 | pages = 619–23 | date = May 2013 | pmid = 23676681 | doi = 10.1038/nature12148 | bibcode = 2013Natur.497..619D | s2cid = 4358570 }}</ref> In human cells, research demonstrated a universal logic evaluator that operates in mammalian cells in 2007.<ref>{{cite journal | vauthors = Rinaudo K, Bleris L, Maddamsetti R, Subramanian S, Weiss R, Benenson Y | title = A universal RNAi-based logic evaluator that operates in mammalian cells | journal = Nature Biotechnology | volume = 25 | issue = 7 | pages = 795–801 | date = July 2007 | pmid = 17515909 | doi = 10.1038/nbt1307 | s2cid = 280451 }}</ref> Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011.<ref>{{cite journal | vauthors = Xie Z, Wroblewska L, Prochazka L, Weiss R, Benenson Y | title = Multi-input RNAi-based logic circuit for identification of specific cancer cells | journal = Science | volume = 333 | issue = 6047 | pages = 1307–11 | date = September 2011 | pmid = 21885784 | doi = 10.1126/science.1205527 | bibcode = 2011Sci...333.1307X | s2cid = 13743291 | url = https://semanticscholar.org/paper/372e175668b5323d79950b58f12b36f6974a81ef }}</ref> Another group of researchers demonstrated in 2016 that principles of [[computer engineering]] can be used to automate digital circuit design in bacterial cells.<ref>{{cite journal | vauthors = Nielsen AA, Der BS, Shin J, Vaidyanathan P, Paralanov V, Strychalski EA, Ross D, Densmore D, Voigt CA | title = Genetic circuit design automation | journal = Science | volume = 352 | issue = 6281 | pages = aac7341 | date = April 2016 | pmid = 27034378 | doi = 10.1126/science.aac7341 | doi-access = free }}</ref> In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.<ref>{{cite journal | vauthors = Weinberg BH, Pham NT, Caraballo LD, Lozanoski T, Engel A, Bhatia S, Wong WW | title = Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells | journal = Nature Biotechnology | volume = 35 | issue = 5 | pages = 453–462 | date = May 2017 | pmid = 28346402 | pmc = 5423837 | doi = 10.1038/nbt.3805 }}</ref><br />
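A common abstraction of such genetic logic treats a two-input promoter as the product of two Hill activation terms, with a threshold turning the analog output into a digital value. The sketch below is a generic model of this kind; the parameters and threshold are assumptions for illustration, not the circuitry of any cited system.

```python
def activation(x, k=1.0, n=2.0):
    """Hill activation term for one inducer input (illustrative parameters)."""
    return x**n / (k**n + x**n)

def and_gate(a, b, threshold=0.25):
    """Toy transcriptional AND gate: the promoter fires strongly only when
    both activators are bound, so output ~ product of the two Hill terms."""
    output = activation(a) * activation(b)
    return output > threshold

# Truth-table behavior emerges from the analog model plus a threshold:
for a in (0.0, 5.0):
    for b in (0.0, 5.0):
        print(a > 0, b > 0, "->", and_gate(a, b))
```

Tools like Cello automate exactly this step in reverse: given a desired truth table, they select characterized gates whose response curves compose correctly.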
<br />
<br />
<br />
=== Biosensors 生物传感器 ===<br />
<br />
<br />
A [[biosensor]] refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the [[Luciferase|Lux operon]] of ''[[Aliivibrio fischeri]],''<ref>{{cite journal | vauthors = de Almeida PE, van Rappard JR, Wu JC | title = In vivo bioluminescence for tracking cell fate and function | journal = American Journal of Physiology. Heart and Circulatory Physiology | volume = 301 | issue = 3 | pages = H663–71 | date = September 2011 | pmid = 21666118 | pmc = 3191083 | doi = 10.1152/ajpheart.00337.2011 }}</ref> which codes for the enzyme that is the source of bacterial [[bioluminescence]], and can be placed after a responsive [[Promoter (genetics)|promoter]] to express the luminescence genes in response to a specific environmental stimulus.<ref>{{cite journal | vauthors = Close DM, Xu T, Sayler GS, Ripp S | title = In vivo bioluminescent imaging (BLI): noninvasive visualization and interrogation of biological processes in living animals | journal = Sensors | volume = 11 | issue = 1 | pages = 180–206 | date = 2011 | pmid = 22346573 | pmc = 3274065 | doi = 10.3390/s110100180 }}</ref> One such sensor consisted of a [[bioluminescent bacteria]]l coating on a photosensitive [[computer chip]] to detect certain [[petroleum]] [[pollutant]]s. When the bacteria sense the pollutant, they luminesce.<ref>{{cite journal|last=Gibbs|first=W. Wayt| name-list-style = vanc |date=1997 |title=Critters on a Chip |url=http://www.sciam.com/article.cfm?id=critters-on-a-chip |journal=Scientific American|access-date=2 Mar 2009}}</ref> Another example of a similar mechanism is the detection of landmines by an engineered ''E. coli'' reporter strain capable of detecting [[TNT]] and its main degradation product [[2,4-Dinitrotoluene|DNT]], and consequently producing a green fluorescent protein ([[Green fluorescent protein|GFP]]).<ref>{{Cite journal|last1=Belkin|first1=Shimshon|last2=Yagur-Kroll|first2=Sharon|last3=Kabessa|first3=Yossef|last4=Korouma|first4=Victor|last5=Septon|first5=Tali|last6=Anati|first6=Yonatan|last7=Zohar-Perez|first7=Cheinat|last8=Rabinovitz|first8=Zahi|last9=Nussinovitch|first9=Amos|date=April 2017|title=Remote detection of buried landmines using a bacterial sensor|journal=Nature Biotechnology|volume=35|issue=4|pages=308–310|doi=10.1038/nbt.3791|pmid=28398330|s2cid=3645230|issn=1087-0156}}</ref><br />
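Functionally, such a reporter behaves like a dose-response curve read against a detection threshold. The sketch below idealizes a lux-style sensor; the constants (k, n, basal leakiness, vmax, and the chip's threshold) are assumed values for illustration only.

```python
def luminescence(pollutant, k=0.5, n=2.0, basal=0.01, vmax=100.0):
    """Idealized reporter output: basal leaky expression plus a
    Hill-activated term driven by the analyte concentration."""
    return basal + vmax * pollutant**n / (k**n + pollutant**n)

def detected(pollutant, threshold=10.0):
    """Call a detection event when light output crosses the readout threshold."""
    return luminescence(pollutant) > threshold

# No pollutant vs. pollutant present:
print(detected(0.0), detected(1.0))
```

The basal term matters in practice: a leaky promoter raises the background signal, which limits how low the detection threshold can be set.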
<br />
<br />
<br />
<br />
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.<ref name="pmid26019220">{{cite journal | vauthors = Danino T, Prindle A, Kwong GA, Skalak M, Li H, Allen K, Hasty J, Bhatia SN | title = Programmable probiotics for detection of cancer in urine | journal = Science Translational Medicine | volume = 7 | issue = 289 | pages = 289ra84 | date = May 2015 | pmid = 26019220 | pmc = 4511399 | doi = 10.1126/scitranslmed.aaa3519 }}</ref><br />
<br />
<br />
<br />
<br />
=== Cell transformation 细胞分化 ===<br />
<br />
{{Main|Transformation (genetics)}}Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and synthetic-biologist-designed gene circuits, which can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.<br />
<br />
<br />
<br />
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering ''E. coli'' and [[yeast]] for commercial production of a precursor of the [[Antimalarial medication|antimalarial drug]], [[Artemisinin]].<ref>{{cite journal | vauthors = Westfall PJ, Pitera DJ, Lenihan JR, Eng D, Woolard FX, Regentin R, Horning T, Tsuruta H, Melis DJ, Owens A, Fickes S, Diola D, Benjamin KR, Keasling JD, Leavell MD, McPhee DJ, Renninger NS, Newman JD, Paddon CJ | title = Production of amorphadiene in yeast, and its conversion to dihydroartemisinic acid, precursor to the antimalarial agent artemisinin | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 3 | pages = E111–8 | date = January 2012 | pmid = 22247290 | pmc = 3271868 | doi = 10.1073/pnas.1110740109 | bibcode = 2012PNAS..109E.111W }}</ref><br />
<br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be [[Transformation (genetics)|transformed]] with new DNA. Several ways allow constructing synthetic DNA components and even entire [[Artificial gene synthesis|synthetic genomes]], but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or [[phenotype]]s while growing and thriving.<ref>{{cite news|url=https://www.independent.co.uk/news/science/eureka-scientists-unveil-giant-leap-towards-synthetic-life-9219644.html|title=Eureka! Scientists unveil giant leap towards synthetic life|last=Connor|first=Steve|date=28 March 2014|work=The Independent|access-date=2015-08-06}}</ref> Cell transformation is used to create [[Synthetic biological circuit|biological circuits]], which can be manipulated to yield desired outputs.<ref name=":0" /><ref name=":1" /><br />
<br />
<br />
<br />
<br />
By integrating synthetic biology with [[materials science]], it would be possible to use cells as microscopic molecular foundries to produce materials with properties whose properties were genetically encoded. Re-engineering has produced Curli fibers, the [[amyloid]] component of extracellular material of [[biofilms]], as a platform for programmable [[nanomaterial]]. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.<ref>{{cite journal|vauthors=Nguyen PQ, Botyanszki Z, Tay PK, Joshi NS|date=September 2014|title=Programmable biofilm-based materials from engineered curli nanofibres|journal=Nature Communications|volume=5|pages=4945|bibcode=2014NatCo...5.4945N|doi=10.1038/ncomms5945|pmid=25229329|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
=== Designed proteins 设计蛋白质 ===<br />
<br />
<br />
<br />
<br />
[[File:Top7.png|thumb|The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature<ref name="kuhlman03">{{cite journal | vauthors = Kuhlman B, Dantas G, Ireton GC, Varani G, Stoddard BL, Baker D | title = Design of a novel globular protein fold with atomic-level accuracy | journal = Science | volume = 302 | issue = 5649 | pages = 1364–8 | date = November 2003 | pmid = 14631033 | doi = 10.1126/science.1089427 | bibcode = 2003Sci...302.1364K | s2cid = 1939390 | url = https://semanticscholar.org/paper/3188f905b60172dcad17a9b8c23567400c2bb65f }}</ref> ]]<br />
<br />
<br />
<br />
<br />
Natural proteins can be engineered; for example, by [[directed evolution]], novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a [[helix bundle]] that was capable of binding [[oxygen]] with similar properties as [[hemoglobin]], yet did not bind [[carbon monoxide]].<ref>{{cite journal | vauthors = Koder RL, Anderson JL, Solomon LA, Reddy KS, Moser CC, Dutton PL | title = Design and engineering of an O(2) transport protein | journal = Nature | volume = 458 | issue = 7236 | pages = 305–9 | date = March 2009 | pmid = 19295603 | pmc = 3539743 | doi = 10.1038/nature07841 | bibcode = 2009Natur.458..305K }}</ref> A similar protein structure was generated to support a variety of [[oxidoreductase]] activities<ref>{{cite journal | vauthors = Farid TA, Kodali G, Solomon LA, Lichtenstein BR, Sheehan MM, Fry BA, Bialas C, Ennist NM, Siedlecki JA, Zhao Z, Stetz MA, Valentine KG, Anderson JL, Wand AJ, Discher BM, Moser CC, Dutton PL | title = Elementary tetrahelical protein design for diverse oxidoreductase functions | journal = Nature Chemical Biology | volume = 9 | issue = 12 | pages = 826–833 | date = December 2013 | pmid = 24121554 | pmc = 4034760 | doi = 10.1038/nchembio.1362 }}</ref> while another formed a structurally and sequentially novel [[ATPase]].<ref name="WangHecht2020">{{cite journal|last1=Wang|first1=MS|last2=Hecht|first2=MH|title=A Completely De Novo ATPase from Combinatorial Protein Design|journal=Journal of the American Chemical Society|year=2020|volume=142|issue=36|pages=15230–15234|issn=0002-7863|doi=10.1021/jacs.0c02954|pmid=32833456}}</ref> Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule [[clozapine N-oxide]] but insensitive to the native [[ligand]], [[acetylcholine]]; these receptors are known as [[Receptor activated solely by a synthetic ligand|DREADDs]].<ref>{{cite journal | vauthors = Armbruster BN, Li X, Pausch MH, Herlitze S, Roth BL | title = Evolving the lock to fit the key to create a family of G protein-coupled receptors potently activated by an inert ligand | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 104 | issue = 12 | pages = 5163–8 | date = March 2007 | pmid = 17360345 | pmc = 1829280 | doi = 10.1073/pnas.0700293104 | bibcode = 2007PNAS..104.5163A }}</ref> Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100 fold specificity for production of longer chain alcohols from sugar.<ref>{{cite journal | vauthors = Mak WS, Tran S, Marcheschi R, Bertolani S, Thompson J, Baker D, Liao JC, Siegel JB | title = Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway | journal = Nature Communications | volume = 6 | pages = 10005 | date = November 2015 | pmid = 26598135 | pmc = 4673503 | doi = 10.1038/ncomms10005 | bibcode = 2015NatCo...610005M }}</ref><br />
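At its core, directed evolution is a mutate-and-select loop. The sketch below illustrates that loop on a toy problem where "fitness" is just similarity to a target string, standing in for a real screen or selection; the population size, mutation scheme and target are arbitrary choices, not taken from any cited experiment.

```python
import random

def evolve(target, pop_size=50, generations=200, seed=0):
    """Toy directed-evolution loop: random point mutation plus selection.
    Fitness is similarity to a target string -- a stand-in for a real
    laboratory screen; everything here is illustrative."""
    rng = random.Random(seed)
    alphabet = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

    def fitness(seq):
        return sum(a == b for a, b in zip(seq, target))

    # random starting library
    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    for _ in range(generations):
        mutants = []
        for seq in pop:                      # one random point mutation each
            i = rng.randrange(len(seq))
            mutants.append(seq[:i] + rng.choice(alphabet) + seq[i + 1:])
        # "screen": keep the best pop_size variants among parents + mutants
        pop = sorted(pop + mutants, key=fitness, reverse=True)[:pop_size]
    return pop[0]

print(evolve("MKVLAT"))
```

Because parents compete alongside mutants, the best variant found so far is never lost, mirroring how laboratory rounds carry the best clones forward.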
<br />
<br />
<br />
<br />
Another common investigation is [[Expanded genetic code|expansion]] of the natural set of 20 [[amino acid]]s. Excluding [[stop codon]]s, 61 [[codons]] have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl [[tyrosine]]; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded [[nonsense suppressor]] [[Transfer RNA|tRNA]]-[[Aminoacyl tRNA synthetase]] pairs from other organisms, though in most cases substantial engineering is required.<ref>{{cite journal | vauthors = Wang Q, Parrish AR, Wang L | title = Expanding the genetic code for biological studies | journal = Chemistry & Biology | volume = 16 | issue = 3 | pages = 323–36 | date = March 2009 | pmid = 19318213 | pmc = 2696486 | doi = 10.1016/j.chembiol.2009.03.001 }}</ref><br />
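The 61-sense-codon / 20-amino-acid arithmetic can be checked directly against the standard genetic code (NCBI translation table 1), written here with codons in TCAG order:

```python
from itertools import product

# NCBI standard genetic code (translation table 1), codons enumerated
# in TCAG order; '*' marks the three stop codons.
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in product("TCAG", repeat=3)]
CODE = dict(zip(CODONS, AAS))

sense = [c for c, aa in CODE.items() if aa != "*"]
amino_acids = {aa for aa in AAS if aa != "*"}
print(len(CODE), len(sense), len(amino_acids))   # 64 codons, 61 sense, 20 amino acids
```

Genetic-code expansion typically reassigns one of the redundant sense codons or a stop codon (often the amber stop, TAG) to a new amino acid.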
<br />
<br />
<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid.<ref>{{cite journal|author=Davidson, AR|author2=Lumb, KJ|author3=Sauer, RT|date=1995|title=Cooperatively folded proteins in random sequence libraries|journal=Nature Structural Biology|volume=2|issue=10|pages=856–864|doi=10.1038/nsb1095-856|pmid=7552709|s2cid=31781262}}</ref> For instance, several [[Chemical polarity|non-polar]] amino acids within a protein can all be replaced with a single non-polar amino acid.<ref>{{cite journal|vauthors=Kamtekar S, Schiffer JM, Xiong H, Babik JM, Hecht MH|date=December 1993|title=Protein design by binary patterning of polar and nonpolar amino acids|journal=Science|volume=262|issue=5140|pages=1680–5|bibcode=1993Sci...262.1680K|doi=10.1126/science.8259512|pmid=8259512}}</ref> One project demonstrated that an engineered version of [[Chorismate mutase]] still had catalytic activity when only 9 amino acids were used.<ref>{{cite journal|vauthors=Walter KU, Vamvaca K, Hilvert D|date=November 2005|title=An active enzyme constructed from a 9-amino acid alphabet|journal=The Journal of Biological Chemistry|volume=280|issue=45|pages=37742–6|doi=10.1074/jbc.M507210200|pmid=16144843|doi-access=free}}</ref><br />
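A reduced-alphabet library can be sketched as a mapping that collapses each chemical class of residues to a single representative. The 6-letter grouping below is purely illustrative (the chorismate mutase study used its own 9-letter alphabet, not this one):

```python
# Collapse the 20-letter amino-acid alphabet to a smaller one by replacing
# each residue with a class representative. Grouping is illustrative only.
CLASSES = {
    "L": "AVLIMFWC",  # hydrophobic residues -> L
    "S": "STNQY",     # polar residues       -> S
    "K": "KRH",       # positive residues    -> K
    "D": "DE",        # negative residues    -> D
}
SUBST = {aa: rep for rep, members in CLASSES.items() for aa in members}
SUBST["G"] = "G"  # glycine kept as-is (special backbone flexibility)
SUBST["P"] = "P"  # proline kept as-is (special backbone rigidity)

def reduce_sequence(seq):
    """Rewrite a protein sequence using the reduced alphabet."""
    return "".join(SUBST[aa] for aa in seq)

print(reduce_sequence("MKVLGPDE"))
```

Designs written in the reduced alphabet test how much of the 20-letter chemistry a fold actually needs.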
<br />
<br />
<br />
Researchers and companies practice synthetic biology to synthesize [[industrial enzymes]] with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective.<ref>{{cite web|url=https://www.thermofisher.com/us/en/home/life-science/synthetic-biology/synthetic-biology-applications.html|title=Synthetic Biology Applications|website=www.thermofisher.com|access-date=2015-11-12}}</ref> The improvement of metabolic engineering through synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<ref>{{cite journal | vauthors = Liu Y, Shin HD, Li J, Liu L | title = Toward metabolic engineering in the context of system biology and synthetic biology: advances and prospects | journal = Applied Microbiology and Biotechnology | volume = 99 | issue = 3 | pages = 1109–18 | date = February 2015 | pmid = 25547833 | doi = 10.1007/s00253-014-6298-y | s2cid = 954858 }}</ref><br />
<br />
Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<br />
<br />
<br />
<br />
<br />
=== Designed nucleic acid systems 设计核酸系统 ===<br />
<br />
Scientists can encode digital information onto a single strand of [[synthetic DNA]]. In 2012, [[George M. Church]] encoded one of his books about synthetic biology in DNA. The 5.3 [[Megabit|Mb]] of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA.<ref>{{cite journal | vauthors = Church GM, Gao Y, Kosuri S | title = Next-generation digital information storage in DNA | journal = Science | volume = 337 | issue = 6102 | pages = 1628 | date = September 2012 | pmid = 22903519 | doi = 10.1126/science.1226355 | bibcode = 2012Sci...337.1628C | s2cid = 934617 | url = https://semanticscholar.org/paper/0856a685e85bcd27c11cd5f385be818deceb27bd }}</ref> A similar project encoded the complete [[sonnet]]s of [[William Shakespeare]] in DNA.<ref>{{cite web|url=http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|title=Huge amounts of data can be stored in DNA|date=23 January 2013|publisher=Sky News|access-date=24 January 2013|archive-url=https://web.archive.org/web/20160531044937/http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|archive-date=2016-05-31 }}</ref> More generally, algorithms such as NUPACK,<ref>{{Cite journal|last1=Zadeh|first1=Joseph N.|last2=Steenberg|first2=Conrad D.|last3=Bois|first3=Justin S.|last4=Wolfe|first4=Brian R.|last5=Pierce|first5=Marshall B.|last6=Khan|first6=Asif R.|last7=Dirks|first7=Robert M.|last8=Pierce|first8=Niles A.|date=2011-01-15|title=NUPACK: Analysis and design of nucleic acid systems|journal=Journal of Computational Chemistry|language=en|volume=32|issue=1|pages=170–173|doi=10.1002/jcc.21596|pmid=20645303}}</ref> ViennaRNA,<ref>{{Cite journal|last1=Lorenz|first1=Ronny|last2=Bernhart|first2=Stephan H.|last3=Höner zu Siederdissen|first3=Christian|last4=Tafer|first4=Hakim|last5=Flamm|first5=Christoph|last6=Stadler|first6=Peter F.|last7=Hofacker|first7=Ivo L.|date=2011-11-24|title=ViennaRNA Package 2.0|journal=Algorithms for 
Molecular Biology|language=en|volume=6|issue=1|pages=26|doi=10.1186/1748-7188-6-26|issn=1748-7188|pmc=3319429|pmid=22115189}}</ref> Ribosome Binding Site Calculator,<ref>{{Cite journal|last1=Salis|first1=Howard M.|last2=Mirsky|first2=Ethan A.|last3=Voigt|first3=Christopher A.|date=October 2009|title=Automated design of synthetic ribosome binding sites to control protein expression|journal=Nature Biotechnology|language=en|volume=27|issue=10|pages=946–950|doi=10.1038/nbt.1568|pmid=19801975|issn=1546-1696|pmc=2782888}}</ref> Cello,<ref>{{Cite journal|last1=Nielsen|first1=A. A. K.|last2=Der|first2=B. S.|last3=Shin|first3=J.|last4=Vaidyanathan|first4=P.|last5=Paralanov|first5=V.|last6=Strychalski|first6=E. A.|last7=Ross|first7=D.|last8=Densmore|first8=D.|last9=Voigt|first9=C. A.|date=2016-04-01|title=Genetic circuit design automation|journal=Science|language=en|volume=352|issue=6281|pages=aac7341|doi=10.1126/science.aac7341|pmid=27034378|issn=0036-8075|doi-access=free}}</ref> and Non-Repetitive Parts Calculator<ref>{{Cite journal|last1=Hossain|first1=Ayaan|last2=Lopez|first2=Eriberto|last3=Halper|first3=Sean M.|last4=Cetnar|first4=Daniel P.|last5=Reis|first5=Alexander C.|last6=Strickland|first6=Devin|last7=Klavins|first7=Eric|last8=Salis|first8=Howard M.|date=2020-07-13|title=Automated design of thousands of nonrepetitive parts for engineering stable genetic systems|url=https://www.nature.com/articles/s41587-020-0584-2|journal=Nature Biotechnology|language=en|pages=1–10|doi=10.1038/s41587-020-0584-2|pmid=32661437|s2cid=220506228|issn=1546-1696}}</ref> enable the design of new genetic systems.<br />
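DNA data storage of the kind described above ultimately rests on a mapping from bits to bases. The sketch below is a minimal, hypothetical illustration (the `encode`/`decode` helpers are ours, not Church et al.'s actual pipeline): it stores one bit per base, mapping 0 to {A, C} and 1 to {G, T}, and always picks the base that differs from the previous one, avoiding homopolymer runs that are difficult to synthesize and sequence.<br />

```python
# Illustrative one-bit-per-base encoding (a hypothetical sketch, not the
# published scheme): 0 -> {A, C}, 1 -> {G, T}. Within each pair we pick the
# base that differs from the previous one, so no base ever repeats, which
# avoids homopolymer runs that are hard to synthesize and read back.
ZERO, ONE = "AC", "GT"

def encode(data: bytes) -> str:
    bases = []
    for byte in data:
        for i in range(7, -1, -1):           # most significant bit first
            pool = ONE if (byte >> i) & 1 else ZERO
            prev = bases[-1] if bases else ""
            bases.append(pool[1] if pool[0] == prev else pool[0])
    return "".join(bases)

def decode(strand: str) -> bytes:
    bits = "".join("1" if base in ONE else "0" for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

seq = encode(b"synthetic biology")
assert decode(seq) == b"synthetic biology"
assert all(seq[i] != seq[i + 1] for i in range(len(seq) - 1))  # no repeats
```

At one bit per base, the 5.3 Mb mentioned above corresponds to roughly 5.3 million bases; practical schemes additionally split the strand into addressed blocks and add error correction, which this sketch omits.<br />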
<br />
<br />
<br />
Gene functions in the minimal genome of the synthetic organism, ''Syn 3''.<br />
<br />
在合成生物体 Syn 3 的最小基因组中发挥功能的基因。<br />
<br />
Many technologies have been developed for incorporating [[Nucleic acid analogue|unnatural nucleotides]] and amino acids into nucleic acids and proteins, both ''in vitro'' and ''in vivo''. For example, in May 2014, researchers announced that they had successfully introduced two new artificial [[nucleotides]] into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate [[Messenger RNA|mRNA]] or proteins able to use the artificial nucleotides.<ref name="NYT-20140507">{{cite news|url=https://www.nytimes.com/2014/05/08/business/researchers-report-breakthrough-in-creating-artificial-genetic-code.html|title=Researchers Report Breakthrough in Creating Artificial Genetic Code|last=Pollack|first=Andrew|date=May 7, 2014|work=[[New York Times]]|access-date=May 7, 2014}}</ref><ref name="NATURE-20140507">{{cite journal|last=Callaway|first=Ewen|date=May 7, 2014|title=First life with 'alien' DNA|url=http://www.nature.com/news/first-life-with-alien-dna-1.15179|journal=[[Nature (journal)|Nature]]|doi=10.1038/nature.2014.15179|s2cid=86967999|access-date=May 7, 2014}}</ref><ref name="NATJ-20140507">{{cite journal|vauthors=Malyshev DA, Dhami K, Lavergne T, Chen T, Dai N, Foster JM, Corrêa IR, Romesberg FE|date=May 2014|title=A semi-synthetic organism with an expanded genetic alphabet|journal=Nature|volume=509|issue=7500|pages=385–8|bibcode=2014Natur.509..385M|doi=10.1038/nature13314|pmc=4058825|pmid=24805238}}</ref><br />
<br />
One important topic in synthetic biology is synthetic life, that is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to either probe the origins of life, study some of the properties of life, or more ambitiously to recreate life from non-living (abiotic) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.<br />
<br />
合成生物学的一个重要课题是合成生命,它涉及到在体外由生物分子和/或其化学类似物创造的假想生物体。合成生命实验或者试图探索生命的起源,研究生命的某些特性,或者更雄心勃勃地从非生命(非生物)组成部分中重新创造生命。合成生命生物学试图创造能够执行重要功能的生命有机体,从制造药品到净化被污染的土地和水。在医学上,它提供了使用设计生物学部件作为新类型治疗和诊断工具的起点的前景。<br />
<br />
<br />
<br />
=== Space exploration 太空探索 ===<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome with an expanded genetic code. The nucleosides added are d5SICS and dNaM. In 2017 the international Build-a-Cell large-scale research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. <br />
The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.<br />
<br />
2014年,第一个具有“人工”扩展 DNA 编码的活生物体问世;研究小组使用大肠杆菌,提取其基因组,并用带有扩展遗传密码的染色体替换。添加的核苷是 d5SICS 和 dNaM。2017年,旨在构建合成活细胞的国际大规模研究合作 Build-a-Cell 启动,随后多个国家相继成立了国家级合成细胞组织,包括 FabriCell、MaxSynBio 和 BaSyC。欧洲的合成细胞研究工作于2019年统一为 SynCellEU 倡议。<br />
<br />
Synthetic biology raised [[NASA|NASA's]] interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.<ref name="Verseux, C. 2015 73–100">{{Cite book|author=Verseux, C.|author2=Paulino-Lima, I.|author3=Baque, M.|author4=Billi, D.|author5=Rothschild, L.|date=2016|title=Synthetic Biology for Space Exploration: Promises and Societal Implications|journal=Ambivalences of Creating Life. Societal and Philosophical Dimensions of Synthetic Biology, Publisher: Springer-Verlag|volume=45|pages=73–100|doi=10.1007/978-3-319-21088-9_4|series=Ethics of Science and Technology Assessment|isbn=978-3-319-21087-2}}</ref><ref>{{cite journal|last1=Menezes|first1=A|last2=Cumbers|first2=J|last3=Hogan|first3=J|last4=Arkin|first4=A|date=2014|title=Towards synthetic biological approaches to resource utilization on space missions|journal=Journal of the Royal Society, Interface|volume=12|issue=102|pages=20140715|doi=10.1098/rsif.2014.0715|pmid=25376875|pmc=4277073}}</ref><ref>{{cite journal | vauthors = Montague M, McArthur GH, Cockell CS, Held J, Marshall W, Sherman LA, Wang N, Nicholson WL, Tarjan DR, Cumbers J | title = The role of synthetic biology for in situ resource utilization (ISRU) | journal = Astrobiology | volume = 12 | issue = 12 | pages = 1135–42 | date = December 2012 | pmid = 23140229 | doi = 10.1089/ast.2012.0829 | bibcode = 2012AsBio..12.1135M }}</ref> On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<ref name="Verseux, C. 
2015 73–100" /> Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.<ref>{{Cite web|title=NASA - Designer Plants on Mars|url=https://www.nasa.gov/centers/goddard/news/topstory/2005/mars_plants.html|last=GSFC|first=Bill Steigerwald |website=www.nasa.gov|language=en|access-date=2020-05-29}}</ref><br />
<br />
<br />
<br />
=== Synthetic life 合成生命 ===<br />
<br />
{{Further|Artificially Expanded Genetic Information System|Hypothetical types of biochemistry}}<br />
<br />
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. The other way is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into bacteria. The bacteria then only release target therapeutic molecules to the tumor through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
<br />
长期以来,细菌一直被用于癌症治疗。双歧杆菌和梭状芽胞杆菌可选择性地定殖于肿瘤并减小肿瘤体积。最近,合成生物学家对细菌进行了重新编程,使其能够感知特定的癌症状态并做出反应。大多数情况下,细菌被用来直接向肿瘤输送治疗分子,以最小化脱靶效应。为了靶向肿瘤细胞,细菌表面表达了可以特异性识别肿瘤的肽。所用的肽包括一种特异性靶向人表皮生长因子受体2的亲和体分子(affibody)和一种合成粘附素。另一种方法是通过在细菌中构建“与”逻辑门,让细菌感知肿瘤微环境,例如缺氧。然后,细菌仅通过裂解或细菌分泌系统向肿瘤释放靶向治疗分子。裂解的优点是可以刺激免疫系统并控制生长。这一过程中可以使用多种类型的分泌系统以及其他策略。该系统可由外部信号诱导,诱导因子包括化学物质、电磁波或光波。<br />
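The AND-gate behaviour described above boils down to a boolean rule: the payload is released only when both inputs, a recognized tumor marker and a hypoxic environment, are present. A minimal sketch with hypothetical signal names and thresholds (real circuits implement this with promoters and repressors, not floating-point comparisons):<br />

```python
# Hypothetical sketch of the AND-gate release logic: both conditions must
# hold before the engineered bacterium releases its therapeutic payload.
def and_gate_release(marker_signal: float, oxygen_level: float,
                     marker_threshold: float = 0.5,
                     hypoxia_threshold: float = 0.2) -> bool:
    senses_marker = marker_signal >= marker_threshold    # tumor-surface peptide bound
    senses_hypoxia = oxygen_level <= hypoxia_threshold   # hypoxic microenvironment
    return senses_marker and senses_hypoxia

assert and_gate_release(0.9, 0.1)          # marker + hypoxia -> release
assert not and_gate_release(0.9, 0.8)      # marker alone -> hold
assert not and_gate_release(0.1, 0.1)      # hypoxia alone -> hold
```

Requiring both signals is what limits off-target release: either condition alone, which may also occur in healthy tissue, is insufficient.<br />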
<br />
[[File:Syn3 genome.svg|thumb|upright=1.25|[[Gene]] functions in the minimal [[genome]] of the synthetic organism, ''[[Syn 3]]''.<ref name="Hutchison">{{cite journal | vauthors = Hutchison CA, Chuang RY, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, Gill J, Kannan K, Karas BJ, Ma L, Pelletier JF, Qi ZQ, Richter RA, Strychalski EA, Sun L, Suzuki Y, Tsvetanova B, Wise KS, Smith HO, Glass JI, Merryman C, Gibson DG, Venter JC | title = Design and synthesis of a minimal bacterial genome | journal = Science | volume = 351 | issue = 6280 | pages = aad6253 | date = March 2016 | pmid = 27013737 | doi = 10.1126/science.aad6253 | bibcode = 2016Sci...351.....H | doi-access = free }}</ref>]]<br />
<br />
One important topic in synthetic biology is ''synthetic life'', that is concerned with hypothetical organisms created ''[[in vitro]]'' from [[biomolecule]]s and/or [[hypothetical types of biochemistry|chemical analogues thereof]]. Synthetic life experiments attempt to either probe the [[origins of life]], study some of the properties of life, or more ambitiously to recreate life from non-living ([[abiotic components|abiotic]]) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water.<ref name="enzymes2014">{{cite news |last=Connor |first=Steve |url=https://www.independent.co.uk/news/science/major-synthetic-life-breakthrough-as-scientists-make-the-first-artificial-enzymes-9896333.html |title=Major synthetic life breakthrough as scientists make the first artificial enzymes |work=The Independent |location=London |date=1 December 2014 |access-date=2015-08-06 }}</ref> In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.<ref name="enzymes2014" /><br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and is unique to cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
在这些治疗方法中应用了多种菌种和菌株。最常用的细菌是鼠伤寒沙门氏菌、大肠杆菌、双歧杆菌、链球菌、乳杆菌、李斯特菌和枯草杆菌。这些菌种各有特性,在组织定殖、与免疫系统的相互作用和应用便利性方面,它们对癌症治疗各有独到之处。<br />
<br />
<br />
<br />
A living "artificial cell" has been defined as a completely synthetic cell that can capture [[energy]], maintain [[electrochemical gradient|ion gradients]], contain [[macromolecules]] as well as store information and have the ability to [[mutate]].<ref name="Deamer">{{cite journal | vauthors = Deamer D | title = A giant step towards artificial life? | journal = Trends in Biotechnology | volume = 23 | issue = 7 | pages = 336–8 | date = July 2005 | pmid = 15935500 | doi = 10.1016/j.tibtech.2005.05.008 }}</ref> Nobody has been able to create such a cell.<ref name='Deamer'/><br />
<br />
<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells.<br />
<br />
免疫系统在癌症中起着重要作用,可以被用来攻击癌细胞。以细胞为基础的疗法侧重于免疫疗法,主要通过改造 T 细胞来实现。<br />
<br />
A completely synthetic bacterial chromosome was produced in 2010 by [[Craig Venter]], and his team introduced it to genomically emptied bacterial host cells.<ref name="gibson52">{{cite journal | vauthors = Gibson DG, Glass JI, Lartigue C, Noskov VN, Chuang RY, Algire MA, Benders GA, Montague MG, Ma L, Moodie MM, Merryman C, Vashee S, Krishnakumar R, Assad-Garcia N, Andrews-Pfannkoch C, Denisova EA, Young L, Qi ZQ, Segall-Shapiro TH, Calvey CH, Parmar PP, Hutchison CA, Smith HO, Venter JC | title = Creation of a bacterial cell controlled by a chemically synthesized genome | journal = Science | volume = 329 | issue = 5987 | pages = 52–6 | date = July 2010 | pmid = 20488990 | doi = 10.1126/science.1190719 | bibcode = 2010Sci...329...52G | doi-access = free }}</ref> The host cells were able to grow and replicate.<ref>{{cite web| url=https://www.npr.org/templates/transcript/transcript.php?storyId=127010591| title=Scientists Reach Milestone On Way To Artificial Life| access-date=2010-06-09|date=2010-05-20}}</ref><ref>{{cite web|last1=Venter|first1=JC|title=From Designing Life to Prolonging Healthy Life|url=https://www.youtube.com/watch?v=Gwu_djYMm3w&t=30s|website=YouTube|publisher=University of California Television (UCTV)|access-date=1 February 2017}}</ref> The [[Mycoplasma laboratorium]] is the only living organism with a completely engineered genome.<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second-generation CAR-based therapy was approved by the FDA.<br />
<br />
T 细胞受体经过改造和“训练”,用以检测癌症表位。嵌合抗原受体(CAR)由抗体片段与细胞内 T 细胞信号域融合而成,这些信号域可以激活细胞并触发其增殖。美国食品药品监督管理局(FDA)已批准了一种第二代基于嵌合抗原受体的疗法。<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used ''E. coli'' that had its genome extracted and replaced with a chromosome with an expanded genetic code. The [[nucleoside]]s added are [[d5SICS]] and [[dNaM]].<ref name="NATJ-20140507"/><br />
<br />
<br />
<br />
Gene switches were designed to enhance safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects. Mechanisms can more finely control the system and stop and reactivate it. Since the number of T-cells is important for therapy persistence and severity, growth of T-cells is also controlled to dial the effectiveness and safety of therapeutics.<br />
<br />
基因开关被设计出来以提高治疗的安全性。如果病人出现严重的副作用,杀伤开关就会终止治疗。机制可以更好地控制系统,停止和重新激活它。由于 T 细胞的数量对治疗的持续性和强度非常重要,因此 T 细胞的生长也受到控制,从而平衡治疗的有效性和安全性。<br />
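The switch behaviour described above, an irreversible kill switch plus reversible stop/reactivate control, can be pictured as a small state machine. The signal names below are hypothetical stand-ins for the chemical, electromagnetic, or light inducers mentioned elsewhere in this section:<br />

```python
# Hypothetical state-machine sketch of engineered gene switches: "pause" and
# "resume" are reversible controls, while "kill" terminates the therapy for good.
class TherapyController:
    def __init__(self) -> None:
        self.state = "active"

    def on_signal(self, signal: str) -> str:
        if signal == "kill":                               # kill switch: one-way
            self.state = "terminated"
        elif signal == "pause" and self.state == "active":
            self.state = "paused"
        elif signal == "resume" and self.state == "paused":
            self.state = "active"                          # reactivation
        return self.state

ctrl = TherapyController()
assert ctrl.on_signal("pause") == "paused"
assert ctrl.on_signal("resume") == "active"
assert ctrl.on_signal("kill") == "terminated"
assert ctrl.on_signal("resume") == "terminated"            # cannot be revived
```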
<br />
In May 2019, researchers, in a milestone effort, reported the creation of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 61 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515"/><ref name="NAT-20190515"/><br />
<br />
<br />
<br />
Although several mechanisms can improve safety and control, limitations include the difficulty of delivering large DNA circuits into cells and risks associated with introducing foreign components, especially proteins, into cells.<br />
<br />
虽然有几种机制可以提高安全性和可控性,但它们也存在局限,包括难以将大型 DNA 电路导入细胞,以及将外来成分(特别是蛋白质)引入细胞所带来的风险。<br />
<br />
In 2017 the international [[Build-a-Cell]] large-scale research collaboration for the construction of synthetic living cell was started,<ref>{{cite web|url=http://buildacell.io/|title=Build-a-Cell|accessdate=4 Dec 2019}}</ref> followed by national synthetic cell organizations in several countries, including FabriCell,<ref>{{cite web|url=http://fabricell.org/|title=FabriCell|accessdate=8 Dec 2019}}</ref> MaxSynBio<ref>{{cite web|url=https://www.maxsynbio.mpg.de/home/|title=MaxSynBio - Max Planck Research Network in Synthetic Biology|accessdate=8 Dec 2019}}</ref> and BaSyC.<ref>{{cite web|url=http://www.basyc.nl/|title=BaSyC|accessdate=8 Dec 2019}}</ref> The European synthetic cell efforts were unified in 2019 as SynCellEU initiative.<ref>{{cite web|url=http://www.syntheticcell.eu/|title=SynCell EU|accessdate=8 Dec 2019}}</ref><br />
<br />
<br />
<br />
=== Drug delivery platforms 药物输送平台 ===<br />
<br />
==== Engineered bacteria-based platform 基于细菌设计的平台 ====<br />
<br />
Bacteria have long been used in cancer treatment. ''[[Bifidobacterium]]'' and ''[[Clostridium]]'' selectively colonize tumors and reduce their size.<ref name="Zu_2014">{{cite journal|vauthors=Zu C, Wang J|date=August 2014|title=Tumor-colonizing bacteria: a potential tumor targeting therapy|url=|journal=Critical Reviews in Microbiology|volume=40|issue=3|pages=225–35|doi=10.3109/1040841X.2013.776511|pmid=23964706|s2cid=26498221}}</ref> Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, [[peptide]]s that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an [[affibody molecule]] that specifically targets human [[Epidermal growth factor receptor|epidermal growth factor receptor 2]]<ref name="Gujrati_2014">{{cite journal|vauthors=Gujrati V, Kim S, Kim SH, Min JJ, Choy HE, Kim SC, Jon S|date=February 2014|title=Bioengineered bacterial outer membrane vesicles as cell-specific drug-delivery vehicles for cancer therapy|url=|journal=ACS Nano|volume=8|issue=2|pages=1525–37|doi=10.1021/nn405724x|pmid=24410085}}</ref> and a synthetic [[Adhesin molecule (immunoglobulin -like)|adhesin]].<ref name="Piñero-Lambea_2015">{{cite journal|vauthors=Piñero-Lambea C, Bodelón G, Fernández-Periáñez R, Cuesta AM, Álvarez-Vallina L, Fernández LÁ|date=April 2015|title=Programming controlled adhesion of E. coli to target surfaces, cells, and tumors with synthetic adhesins|journal=ACS Synthetic Biology|volume=4|issue=4|pages=463–73|doi=10.1021/sb500252a|pmc=4410913|pmid=25045780}}</ref> The other way is to allow bacteria to sense the [[tumor microenvironment]], for example hypoxia, by building an AND logic gate into bacteria.<ref>{{cite journal | last1 = Deyneko | first1 = I.V. | last2 = Kasnitz | first2 = N. | last3 = Leschner | first3 = S. 
| last4 = Weiss | first4 = S. | year = 2016| title = Composing a tumor specific bacterial promoter | url = | journal = PLOS ONE | volume = 11| issue = 5| page = e0155338| doi = 10.1371/journal.pone.0155338 | pmid = 27171245 | pmc = 4865170 }}</ref> The bacteria then only release target therapeutic molecules to the tumor through either [[lysis]]<ref>{{cite journal | last1 = Rice | first1 = KC | last2 = Bayles | first2 = KW | year = 2008 | title = Molecular control of bacterial death and lysis | journal = Microbiol Mol Biol Rev | volume = 72 | issue = 1| pages = 85–109 | doi = 10.1128/mmbr.00030-07 | pmid = 18322035 | pmc = 2268280 }}</ref> or the [[bacterial secretion system]].<ref>{{cite journal | last1 = Ganai | first1 = S. | last2 = Arenas | first2 = R. B. | last3 = Forbes | first3 = N. S. | year = 2009 | title = Tumour-targeted delivery of TRAIL using Salmonella typhimurium enhances breast cancer survival in mice | url = | journal = Br. J. Cancer | volume = 101 | issue = 10| pages = 1683–1691 | doi = 10.1038/sj.bjc.6605403 | pmid = 19861961 | pmc = 2778534 }}</ref> Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
<br />
The creation of new life and the tampering with existing life have raised ethical concerns in the field of synthetic biology and are actively being discussed.<br />
<br />
创造新生命以及篡改现存生命引起了合成生物学领域的伦理问题,目前正处于积极的讨论中。<br />
<br />
<br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are ''[[Salmonella enterica subsp. enterica|Salmonella typhimurium]]'', [[Escherichia coli|''Escherichia coli'']], ''Bifidobacteria'', ''[[Streptococcus]]'', ''[[Lactobacillus]]'', ''[[Listeria]]'' and ''[[Bacillus subtilis]]''. Each of these species has its own properties and is unique to cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<br />
<br />
合成生物学的伦理方面有三个主要特点:生物安全(biosafety)、生物安保(biosecurity)以及新生命形式的创造。其他提到的伦理问题包括对新造物的监管、新造物的专利管理、利益分配和科研诚信。<br />
<br />
==== Cell-based platform 基于细胞的平台 ====<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on [[Cancer immunotherapy|immunotherapies]], mostly by engineering [[T cell]]s.<br />
<br />
Ethical issues have surfaced for recombinant DNA and genetically modified organism (GMO) technologies and extensive regulations of genetic engineering and pathogen research were in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<br />
<br />
重组 DNA 和转基因生物(GMO)技术的伦理问题已经浮出水面,许多司法管辖区对基因工程和病原体研究有着广泛的规定。生物伦理总统委员会前任主席艾米·古特曼认为,我们总体上应当避免过度监管合成生物学、尤其是基因工程的诱惑。古特曼认为:“监管上的节制在新兴技术领域尤为重要……在这些领域,出于不确定性和对未知事物的恐惧而扼杀创新的诱惑尤其强烈。法律和监管限制这类生硬手段不仅可能抑制新利益的分配,还会因阻碍研究人员开发有效的保障措施而对安全与安保适得其反。”<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer [[epitope]]s. [[Chimeric antigen receptor]]s (CARs) are composed of a fragment of an [[antibody]] fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second-generation CAR-based therapy was approved by the FDA.{{Citation needed|date=April 2018}}<br />
<br />
<br />
<br />
Gene switches were designed to enhance safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects.<ref>Jones, B.S., Lamb, L.S., Goldman, F. & Di Stasi, A. Improving the safety of cell therapy products by suicide gene transfer. Front. Pharmacol. 5, 254 (2014).</ref> Mechanisms can more finely control the system and stop and reactivate it.<ref>{{cite journal | last1 = Wei | first1 = P | last2 = Wong | first2 = WW | last3 = Park | first3 = JS | last4 = Corcoran | first4 = EE | last5 = Peisajovich | first5 = SG | last6 = Onuffer | first6 = JJ | last7 = Weiss | first7 = A | last8 = LiWA | year = 2012 | title = Bacterial virulence proteins as tools to rewire kinase pathways in yeast and immune cells | url = | journal = Nature | volume = 488 | issue = 7411| pages = 384–388 | doi = 10.1038/nature11259 | pmid = 22820255 | pmc = 3422413 }}</ref><ref>{{cite journal | last1 = Danino | first1 = T. | last2 = Mondragon-Palomino | first2 = O. | last3 = Tsimring | first3 = L. | last4 = Hasty | first4 = J. | year = 2010 | title = A synchronized quorum of genetic clocks | url = | journal = Nature | volume = 463 | issue = 7279| pages = 326–330 | doi = 10.1038/nature08753 | pmid = 20090747 | pmc = 2838179 }}</ref> Since the number of T-cells are important for therapy persistence and severity, growth of T-cells is also controlled to dial the effectiveness and safety of therapeutics.<ref>{{cite journal | last1 = Chen | first1 = Y. Y. | last2 = Jensen | first2 = M. C. | last3 = Smolke | first3 = C. D. | year = 2010 | title = Genetic control of mammalian T-cell proliferation with synthetic RNA regulatory systems | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 107 | issue = 19| pages = 8531–6 | doi = 10.1073/pnas.1001721107 | pmid = 20421500 | pmc = 2889348 }}</ref><br />
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
有这样一个道德问题,创造新的生命形式(有时被称为“扮演上帝”)是否可以接受。目前,自然界中不存在的新生命形式的创造规模很小,潜在的好处和危险仍然不为人知,并且大多数研究确保进行了认真的考虑和监督。通过制造营养缺陷,细菌和酵母可以被改造为不能生产组氨酸的类型。组氨酸是一种对所有生命来说都很重要的氨基酸。因此,这些微生物只能在实验室条件下在富含组氨酸的培养基上生长,从而消除了人们对它们可能扩散到不良区域的担忧。<br />
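The auxotrophy containment strategy above reduces to a simple dependency: growth requires histidine either from the organism's own biosynthesis or from the medium, and the engineered strain removes the first option. A toy model with hypothetical flags:<br />

```python
# Toy containment model for an engineered histidine auxotroph: the strain
# grows only where the medium supplies the amino acid it can no longer make.
def can_grow(makes_histidine: bool, medium_has_histidine: bool) -> bool:
    return makes_histidine or medium_has_histidine

assert can_grow(False, True)        # auxotroph on histidine-rich lab medium
assert not can_grow(False, False)   # auxotroph escaping into the wild
assert can_grow(True, False)        # unmodified strain is unaffected
```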
<br />
<br />
<br />
<br />
== Ethics 伦理问题 ==<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions.<br /><br />
<br />
一些伦理问题与生物安保有关:生物合成技术可能被蓄意用来危害社会和/或环境。由于合成生物学引发了伦理问题和生物安保问题,人类必须考虑并规划如何处理潜在的有害创造物,以及可以采取何种伦理措施来阻止恶意的生物合成技术。然而,除了对合成生物学和生物技术公司的监管之外,这些问题并不被视为新问题,因为它们在早先的重组 DNA 和转基因生物(GMO)辩论中就已被提出,而且许多司法管辖区已经对基因工程和病原体研究进行了广泛的监管。<br />
<br />
{{Update|section|date=January 2019}}<br />
<br />
<br />
<br />
The creation of new life and the tampering with existing life have raised [[Ethics|ethical concerns]] in the field of synthetic biology and are actively being discussed.<ref name=":3" /><br />
<br />
<br />
<br />
The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
欧盟资助的项目 SYNBIOSAFE 已经发布了关于如何管理合成生物学的报告。2007年的一篇论文确定了技术安全、生命安全、伦理和科学-社会接口方面的关键问题,并将其定义为公共教育和科学家、企业、政府和伦理学家之间的持续交流。SYNBIOSAFE 确定的关键生命安全问题涉及到销售合成 DNA 的公司和业余生物学家组成的生物黑客社区。关键的伦理问题涉及到创造新的生命形式。<br />
<br />
Common ethical questions include:<br />
常见的伦理问题包括:<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<br />
<br />
随后的一份报告聚焦于生物安保,特别是所谓的“两用”挑战。例如,虽然合成生物学可能带来更高效的药物生产,但它也可能被用于合成或改造有害病原体(例如天花病毒)。生物黑客社区仍然特别令人担忧,因为开源生物技术分散、扩散的特性使得追踪、监管或缓解潜在的生物安全与生物安保隐忧变得困难。<br />
<br />
* Is it morally right to tamper with nature?<br />
篡改自然在道德上是正确的吗?<br />
<br />
* Is one playing God when creating new life?<br />
创造新生命时,人是否就是上帝?<br />
<br />
COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009.<br />
<br />
COSY 是欧洲的另一项倡议,主要关注于公众认知和交流。为了更好地向更广泛的公众宣传合成生物学及其社会影响,COSY 和 SYNBIOSAFE 于2009年10月出版了一部38分钟的纪录片《安全的合成生物学》。<br />
<br />
* What happens if a synthetic organism accidentally escapes?<br />
如果一种合成生命体意外地从实验室中泄露出去,会发生什么?<br />
<br />
* What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?<br />
假如有人滥用合成生物学并制造出有害的实体(例如生物武器),那该怎么办?<br />
<br />
The International Association Synthetic Biology has proposed self-regulation. This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<br />
<br />
国际合成生物学协会已经建议进行自我调节。它提出了合成生物产业,特别是 DNA 合成公司,应该实施的具体措施。2007年,由主要的 DNA 合成公司的科学家领导的一个小组发表了“为 DNA 合成工业制定有效监督框架的实用计划”。<br />
<br />
* Who will have control of and access to the products of synthetic biology? <br />
谁会拥有控制和访问合成生物产品的权限?<br />
<br />
* Who will gain from these innovations? Investors? Medical patients? Industrial farmers?<br />
谁会从这些创新中获利?投资者?患者?工业农民?<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<br />
<br />
2009年7月9日至10日,美国国家学院科学、技术和法律委员会召开了一次名为“合成生物学新兴领域的机遇与挑战”的研讨会。<br />
<br />
* Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?<ref>{{Cite web|url=https://www.theguardian.com/science/2018/nov/26/worlds-first-gene-edited-babies-created-in-china-claims-scientist|title= World's first gene-edited babies created in China, claims scientist |last=Staff|first=Agencies|date=November 2018|website=The Guardian|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
专利制度允许对生物体授予专利吗?对于生物体的部件,比如人类的 HIV 抗性基因,又该如何?<br />
<br />
* What if a new creation is deserving of moral or legal status?<br />
如果一个新生命理应拥有道德和法律地位该怎么办?<br />
<br />
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'." It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public". Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<br />
<br />
在发表了第一个合成基因组以及随之而来的关于”生命”的媒体报道之后,巴拉克·奥巴马总统设立了研究合成生物学的生物伦理问题总统委员会。该委员会召开了一系列会议,并于2010年12月发布了一份题为《新方向: 合成生物学和新兴技术的伦理学》的报告。委员会指出:“虽然文特尔的成就标志着一项重大的技术进步,证明了一个相对较大的基因组可以准确地合成和替代另一个基因组,但它并不等于‘创造生命’。”报告指出,合成生物学是一个新兴的领域,它产生了潜在的风险和回报。该委员会没有对政策或监督方面的改变提出建议,并呼吁继续为研究提供资金,并为监测、研究新出现的道德问题和公共教育提供新资金。这些安全问题可以通过政策立法规范生物技术的工业用途来避免。“生物伦理总统委员会正在提出关于基因操纵的联邦指导方针...... 作为对宣布从化学合成的基因组中创造出自我复制细胞的回应,提出了18项建议,不仅仅是为了规范科学...... 为了教育公众。”。理查德·路文汀 (Richard Lewontin) 写道,《合成生物学监督原则》中讨论的一些监督安全原则是合理的,但宣言中的建议存在的主要问题是“广大公众缺乏能力,无法强制任意有意义地实现这些建议”。<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms.<ref>{{Cite journal|title=Synthetic Biology and Ethics: Past, Present, and Future|last=Hayry|first=Matti|date=April 2017|journal=Cambridge Quarterly of Healthcare Ethics|volume=26|issue=2|pages=186–205|doi=10.1017/S0963180116000803|pmid=28361718}}</ref> Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<ref>{{Cite journal|title=Synthetic biology applied in the agrifood sector: Public perceptions, attitudes and implications for future studies|last=Jin |display-authors=etal |first=Shan|date=September 2019|journal=Trends in Food Science and Technology|volume=91|pages=454–466|doi=10.1016/j.tifs.2019.07.025}}</ref><ref name=":3">{{Cite journal|url=https://heinonline.org/HOL/LandingPage?handle=hein.journals/macq15&div=8&id=&page=| title=Synthetic Biology: Ethics, Exceptionalism and Expectations| pages=45| last=Newson|first=AJ|date=2015|journal=Macquarie Law Journal| volume=15|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
<br />
<br />
Ethical issues have surfaced for [[recombinant DNA]] and [[genetically modified organism]] (GMO) technologies, and extensive regulations of [[genetic engineering]] and pathogen research were already in place in many jurisdictions. [[Amy Gutmann]], former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<ref>{{cite journal | first = Amy | last = Gutmann | date = 2012 | title = The Ethics of Synthetic Biology | volume=41 | issue=4 | pages = 17–22 | journal = The Hastings Center Report | doi = 10.1002/j.1552-146X.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
<br />
<br />
<br />
合成生物学的危害包括对工人和公众的生物安全危害、蓄意设计可造成危害的生物体所产生的生物安全危害以及环境危害。生物安全危害类似于现有生物技术领域的危害,尽管新的合成生物可能有新的风险,它的主要形式是接触病原体和有毒化学品。为了生物安全,人们担心人工合成或重新设计的生物体在理论上可能被用于生物恐怖主义。潜在的风险包括从零开始再造已知的病原体,将现有的病原体设计成更危险的,以及设计微生物来生产有害的生物化学产品。最后,环境危害包括对生物多样性和生态系统服务的不利影响,包括在农业上利用合成生物体对土地使用的潜在变化。<br />
<br />
=== The "creation" of life 创造生命 ===<br />
<br />
<br />
<br />
<br />
通常认为,尽管由单个基因序列构成的”自下而上”的生物体可能存在困难,现有的转基因生物风险分析系统足以用于合成生物体。一般而言,尽管任何法域一般都没有专门针对合成生物学的条例,合成生物学属于现有的转基因生物和生物技术条例的范围,也属于现有的关于下游商业产品的条例的范围。<br />
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies.<ref name=":3" /> Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature’s "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how [[algal bloom]]s kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to [[nociception|sense pain]], [[sentience]], and self-perception. Should such life be given moral or legal rights? If so, how?<br />
<br />
<br />
<br />
=== Biosafety and biocontainment 生物技术安全和生物抑制 ===<br />
<br />
What is most ethically appropriate when considering biosafety measures? How can accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought have been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and incapable of flourishing in the outside world due to their "unnatural" characteristics, as there has yet to be an example of a transgenic microbe with a fitness advantage in the wild.<br />
<br />
<br />
<br />
In general, existing [[Hierarchy of hazard controls|hazard controls]], risk assessment methodologies, and regulations developed for traditional [[genetically modified organism]]s (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" [[biocontainment]] methods in a laboratory context include physical containment through [[biosafety cabinet]]s and [[glovebox]]es, as well as [[personal protective equipment]]. In an agricultural context they include isolation distances and [[pollen]] barriers, similar to methods for [[Biocontainment of genetically modified organisms|biocontainment of GMOs]]. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent [[horizontal gene transfer]] to natural organisms. Examples of intrinsic biocontainment include [[auxotrophy]], biological [[kill switch]]es, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of [[Xenobiology|xenobiological]] organisms using alternative biochemistry, for example using artificial [[xeno nucleic acid]]s (XNA) instead of DNA.<ref name=":12" /><ref name=":32">{{Cite journal|url=https://publications.europa.eu/en/publication-detail/-/publication/bfd7d06c-d3ae-11e5-a4b5-01aa75ed71a1/language-en|title=Opinion on synthetic biology II: Risk assessment methodologies and safety aspects|last=|first=|date=2016-02-12|website=EU [[Directorate-General for Health and Consumers]]|pages=|via=|doi=10.2772/63529|archive-url=|archive-date=|access-date=|volume=|publisher=Publications Office}}</ref> Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce [[histidine]], an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
<br />
<br />
<br />
<br />
=== Biosecurity 生物安全 ===<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies,<ref name="Bügl, H. et al. 2007 627–629">{{cite journal | vauthors = Bügl H, Danner JP, Molinari RJ, Mulligan JT, Park HO, Reichert B, Roth DA, Wagner R, Budowle B, Scripp RM, Smith JA, Steele SJ, Church G, Endy D | title = DNA synthesis and biological security | journal = Nature Biotechnology | volume = 25 | issue = 6 | pages = 627–9 | date = June 2007 | pmid = 17557094 | doi = 10.1038/nbt0607-627 | s2cid = 7776829 }}</ref><ref>{{cite web|url = http://www.synbioproject.org/site/assets/files/1335/hastings.pdf|title = Ethical Issues in Synthetic Biology: An Overview of the Debates|date = |access-date = |website = }}</ref> the issues are not seen as new, because they were raised during the earlier [[recombinant DNA]] and [[genetically modified organism]] (GMO) debates, and extensive regulations of [[genetic engineering]] and pathogen research are already in place in many jurisdictions.<ref name="bioethics.gov">Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/synthetic-biology-report NEW DIRECTIONS The Ethics of Synthetic Biology and Emerging Technologies] Retrieved 2012-04-14.</ref><br />
<br />
<br />
<br />
=== European Union 欧盟方面 ===<br />
<br />
<br />
<br />
The [[European Union]]-funded project SYNBIOSAFE<ref>[http://www.synbiosafe.eu/ SYNBIOSAFE official site]</ref> has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.<ref name="Priorities">{{cite journal | vauthors = Schmidt M, Ganguli-Mitra A, Torgersen H, Kelle A, Deplazes A, Biller-Andorno N | title = A priority paper for the societal and ethical aspects of synthetic biology | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 3–7 | date = December 2009 | pmid = 19816794 | pmc = 2759426 | doi = 10.1007/s11693-009-9034-7 | url = http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf }}</ref><ref>Schmidt M. Kelle A. Ganguli A, de Vriend H. (Eds.) 2009. [https://www.springer.com/biomed/book/978-90-481-2677-4 "Synthetic Biology. The Technoscience and its Societal Consequences".] Springer Academic Publishing.</ref> The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the [[Do-it-yourself biology|biohacking]] community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called [[dual use technology|dual-use]] challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., [[smallpox]]).<ref>{{cite journal | vauthors = Kelle A | title = Ensuring the security of synthetic biology-towards a 5P governance strategy | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 85–90 | date = December 2009 | pmid = 19816803 | pmc = 2759433 | doi = 10.1007/s11693-009-9041-8 }}</ref> The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<ref>{{cite journal | vauthors = Schmidt M | title = Diffusion of synthetic biology: a challenge to biosafety | journal = Systems and Synthetic Biology | volume = 2 | issue = 1–2 | pages = 1–6 | date = June 2008 | pmid = 19003431 | pmc = 2671588 | doi = 10.1007/s11693-008-9018-z | url = http://www.markusschmidt.eu/pdf/Diffusion_of_synthetic_biology.pdf }}</ref><br />
<br />
<br />
<br />
COSY, another European initiative, focuses on public perception and communication.<ref>[http://www.synbio.at/ COSY: Communicating Synthetic Biology]</ref><ref>{{cite journal | vauthors = Kronberger N, Holtz P, Kerbe W, Strasser E, Wagner W | title = Communicating Synthetic Biology: from the lab via the media to the broader public | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 19–26 | date = December 2009 | pmid = 19816796 | pmc = 2759424 | doi = 10.1007/s11693-009-9031-x }}</ref><ref>{{cite journal | vauthors = Cserer A, Seiringer A | title = Pictures of Synthetic Biology : A reflective discussion of the representation of Synthetic Biology (SB) in the German-language media and by SB experts | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 27–35 | date = December 2009 | pmid = 19816797 | pmc = 2759430 | doi = 10.1007/s11693-009-9038-3 }}</ref> To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published ''SYNBIOSAFE'', a 38-minute documentary film, in October 2009.<ref>[http://www.synbiosafe.eu/DVD COSY/SYNBIOSAFE Documentary]</ref><br />
<br />
<br />
<br />
The International Association Synthetic Biology has proposed self-regulation.<ref>Report of IASB [http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf "Technical solutions for biosecurity in synthetic biology"] {{webarchive |url=https://web.archive.org/web/20110719031805/http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf |date=July 19, 2011 }}, Munich, 2008</ref> This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<ref name="Bügl, H. et al. 2007 627–629" /><br />
<br />
<br />
<br />
=== United States 美国方面 ===<br />
<br />
<br />
<br />
In January 2009, the [[Alfred P. Sloan Foundation]] funded the [[Woodrow Wilson Center]], the [[Hastings Center]], and the [[J. Craig Venter Institute]] to examine the public perception, ethics and policy implications of synthetic biology.<ref>Parens E., Johnston J., Moses J. [http://www.thehastingscenter.org/who-we-are/our-research/selected-past-projects/ethical-issues-in-synthetic-biology-2/ Ethical Issues in Synthetic Biology.] 2009.</ref><br />
<br />
<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<ref>[http://sites.nationalacademies.org/PGA/stl/PGA_050738 NAS Symposium official site]</ref><br />
<br />
<br />
<br />
After the publication of the [[Mycoplasma laboratorium|first synthetic genome]] and the accompanying media coverage about "life" being created, President [[Barack Obama]] established the [[Presidential Commission for the Study of Bioethical Issues]] to study synthetic biology.<ref>Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/node/353 FAQ]</ref> The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'".<ref>[http://bioethics.gov/node/353 Synthetic Biology F.A.Q.'s | Presidential Commission for the Study of Bioethical Issues]</ref> It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education.<ref name="bioethics.gov" /><br />
<br />
<br />
<br />
Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact".<ref name=":2">{{cite journal | vauthors = Erickson B, Singh R, Winters P | title = Synthetic biology: regulating industry uses of new biotechnologies | journal = Science | volume = 333 | issue = 6047 | pages = 1254–6 | date = September 2011 | pmid = 21885775 | doi = 10.1126/science.1211066 | bibcode = 2011Sci...333.1254E | s2cid = 1568198 | url = https://semanticscholar.org/paper/6ae989f6b07dc3c8a8694792d6fe8f036a0e0292 }}</ref> These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".<ref name=":2" /><br />
<br />
<br />
<br />
=== Opposition 反对意见 ===<br />
<br />
On March 13, 2012, over 100 environmental and civil society groups, including [[Friends of the Earth]], the [[International Center for Technology Assessment]] and the [[ETC Group (AGETC)|ETC Group]] issued the manifesto ''The Principles for the Oversight of Synthetic Biology''. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the [[human genome]] or [[human microbiome]].<ref>Katherine Xue for Harvard Magazine. September–October 2014 [http://harvardmagazine.com/2014/09/synthetic-biologys-new-menagerie Synthetic Biology’s New Menagerie]</ref><ref>Yojana Sharma for Scidev.net March 15, 2012. [http://www.scidev.net/global/genomics/news/ngos-call-for-international-regulation-of-synthetic-biology.html NGOs call for international regulation of synthetic biology]</ref> [[Richard Lewontin]] wrote that some of the safety tenets for oversight discussed in ''The Principles for the Oversight of Synthetic Biology'' are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<ref>[http://www.nybooks.com/articles/archives/2014/may/08/new-synthetic-biology-who-gains/?insrc=rel#fnr-1 The New Synthetic Biology: Who Gains?] (2014-05-08), [[Richard C. Lewontin]], ''[[New York Review of Books]]''</ref><br />
<br />
<br />
<br />
== Health and safety 健康和安全 ==<br />
<br />
{{Main|Hazards of synthetic biology}}<br />
<br />
<br />
<br />
The hazards of synthetic biology include [[biosafety]] hazards to workers and the public, [[biosecurity]] hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks.<ref name=":02">{{Cite journal|url=https://blogs.cdc.gov/niosh-science-blog/2017/01/24/synthetic-biology/|title=Synthetic Biology and Occupational Risk|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|date=2017-01-24|journal=Journal of Occupational and Environmental Hygiene|archive-url=|archive-date=|access-date=2018-11-30|last3=Schulte|first3=Paul|volume=14|issue=3|pages=224–236|pmid=27754800|doi=10.1080/15459624.2016.1237031|s2cid=205893358}}</ref><ref name=":12">{{Cite journal|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|last3=Schulte|first3=Paul|date=2016-10-18|title=Synthetic biology and occupational risk|journal=Journal of Occupational and Environmental Hygiene|volume=14|issue=3|pages=224–236|doi=10.1080/15459624.2016.1237031|pmid=27754800|s2cid=205893358|issn=1545-9624}}</ref> For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for [[bioterrorism]]. 
Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.<ref name=":7">{{Cite book|title=Biodefense in the Age of Synthetic Biology|date=2018-06-19|publisher=[[National Academies of Sciences, Engineering, and Medicine]]|isbn=9780309465182|location=|pages=|doi=10.17226/24890|pmid=30629396|last1=National Academies Of Sciences|first1=Engineering|author2=Division on Earth Life Studies|last3=Board On Life|first3=Sciences|author4=Board on Chemical Sciences Technology|author5=Committee on Strategies for Identifying Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology}}</ref> Lastly, environmental hazards include adverse effects on [[biodiversity]] and [[ecosystem services]], including potential changes to land use resulting from agricultural use of synthetic organisms.<ref name=":8">{{Cite web|url=http://ec.europa.eu/environment/integration/research/newsalert/multimedia/synthetic_biology_and_biodiversity.htm|title=Future Brief: Synthetic biology and biodiversity|last=|first=|date=September 2016|website=European Commission|pages=14–15|archive-url=|archive-date=|access-date=2019-01-14}}</ref><ref>{{Cite web|url=https://publications.europa.eu/en/publication-detail/-/publication/9b231c71-faf1-11e5-b713-01aa75ed71a1/language-en/format-PDF|title=Final opinion on synthetic biology III: Risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology|last=|first=|date=2016-04-04|website=EU Directorate-General for Health and Food Safety|pages=8, 27|archive-url=|archive-date=|access-date=2019-01-14}}</ref><br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.<ref name=":32" /><ref name=":22">{{Cite web|url=http://www.hse.gov.uk/research/rrpdf/rr944.pdf|title=Synthetic biology: A review of the technology, and current and future needs from the regulatory framework in Great Britain|last1=Bailey|first1=Claire|last2=Metcalf|first2=Heather|date=2012|website=UK [[Health and Safety Executive]]|archive-url=|archive-date=|access-date=2018-11-29|last3=Crook|first3=Brian}}</ref> Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<ref name=":5">{{Citation|last1=Pei|first1=Lei|title=Regulatory Frameworks for Synthetic Biology|date=2012|work=Synthetic Biology|pages=157–226|publisher=John Wiley & Sons, Ltd|doi=10.1002/9783527659296.ch5|isbn=9783527659296|last2=Bar‐Yam|first2=Shlomiya|last3=Byers‐Corbin|first3=Jennifer|last4=Casagrande|first4=Rocco|last5=Eichler|first5=Florentine|last6=Lin|first6=Allen|last7=Österreicher|first7=Martin|last8=Regardh|first8=Pernilla C.|last9=Turlington|first9=Ralph D.}}</ref><ref name=":4">{{Cite journal|last=Trump|first=Benjamin D.|date=2017-11-01|title=Synthetic biology regulation and governance: Lessons from TAPIC for the United States, European Union, and Singapore|journal=Health Policy|volume=121|issue=11|pages=1139–1146|doi=10.1016/j.healthpol.2017.07.010|pmid=28807332|issn=0168-8510|doi-access=free}}</ref><br />
<br />
<br />
<br />
== See also 请参阅 ==<br />
<br />
{{Colbegin|colwidth=20em}}<br />
<br />
* ''[[ACS Synthetic Biology]]'' (journal)<br />
<br />
* [[Bioengineering]]<br />
<br />
* [[Biomimicry]]<br />
<br />
* [[Carlson Curve]]<br />
<br />
* [[Chiral life concept]]<br />
<br />
* [[Computational biology]]<br />
<br />
* [[Computational biomodeling]]<br />
<br />
* [[DNA digital data storage]]<br />
<br />
* [[Engineering biology]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biology]]. Its edit history can be viewed at [[合成生物学/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E8%87%AA%E5%A4%8D%E5%88%B6_Self-replication&diff=19607自复制 Self-replication2020-12-03T09:46:04Z<p>粲兰:</p>
<hr />
<div>{{#seo:<br />
|keywords=自复制,生物细胞,计算机<br />
|description=一个动力系统任何能产生与自身相同或相似的复制体的的行为<br />
}}<br />
[[Image:DNA_chemical_structure.svg|thumb|right|200px|DNA分子结构 ]]<br />
'''自复制 Self-replication'''是一个动力系统,该系统能产生与自身相同或相似的复制体。生物细胞,在适当的环境下,通过细胞分裂进行繁殖。在细胞分裂过程中,DNA被复制,并在生殖过程中传递给后代。生物病毒可以复制,但只能通过感染过程控制细胞的生殖机制。有害的朊病毒蛋白可以通过将正常的蛋白质转化为反常形式来复制。<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref>计算机病毒利用计算机上已有的硬件和软件进行复制。<br />
<br />
自复制在机器人学中一直是一个重要的研究方向,也是科幻小说中常见的主题。任何不能产生完美复制品的''自我复制机制 self-replicating mechanism''(即发生突变)都会经历遗传变异,产生自身的变异体。这些变异体将受到自然选择的作用:有些变异体在当前环境中比其他变异体更能生存,并在数量上胜过它们。<br />
<br />
<br />
==综述==<br />
===理论===<br />
<br />
[[约翰·冯·诺依曼 John von Neumann]]的早期研究<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref>表明'''复制因子 replicators''' 有几个部分:<br />
<br />
*'''<font color="#ff8000">复制因子 replicator</font>'''的编码表示<br />
*一种能够复制上述编码表示的机制<br />
*一种能够在复制因子所处的环境中启动构建过程的机制<br />
<br />
<br />
尽管学界尚未发现实例,这种模式仍可能存在例外。例如,科学家们已经接近于构建出可以在RNA单体和转录酶的“环境”中[https://arstechnica.com/science/2011/04/investigations-into-the-ancient-RNA-world/ 复制的RNA]。在这种情况下,身体就是基因组,而专门的复制机制位于外部。这种系统仍无法摆脱对外部复制机制的依赖,因此更准确的描述是“辅助复制”而不是“自复制”。<br />
<br />
<br />
然而,最简单的可能情况是只有基因组本身存在。如果没有关于自我繁殖步骤的任何说明,把一个只有基因组的系统描述为类似晶体的东西也许更为恰当。<br />
<br />
<br><br />
<br />
===自复制的种类===<br />
<br />
最近的研究<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref>已经开始对'''复制因子replicators'''进行分类。这种分类通常基于它们所需要的支持程度。<br />
<br />
<br />
*'''<font color="#ff8000">天然复制因子 Natural replicators</font>''',设计全部或绝大部分不经人手,浑然天成。这样的系统包含自然的生命形式。<br />
*'''<font color="#ff8000">自养复制因子 Autotrophic replicators</font>'''可以在自然环境下进行'''自我复制 self-replicating'''。它们自行收集构建自身所需的材料。据推测,人类可以设计出非生物的自养复制因子,并且可以轻易地将其设计为同时生产人类所需的产品。<br />
*'''<font color="#ff8000">自生产系统 Self-reproductive systems</font>'''存在于假想当中,可以利用工业原料,例如金属棒和金属丝,以产生自身的拷贝。<br />
*'''<font color="#ff8000">自组装系统 Self-assembling systems</font>'''自动将它们各种已完成的部分组装起来。这种系统的简单例子已经在宏观尺度得到展示。<br />
<br />
<br />
机械复制因子的设计空间非常广阔。迄今为止,Robert Freitas和Ralph Merkle的综合研究<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> 已经确定了137个设计维度并将其分为十几个独立的类别,包括:<br />
:(1)复制控制 Replication Control,<br />
:(2)复制信息 Replication Information,<br />
:(3)'''<font color="#ff8000">复制基质 Replication Substrate</font>''',<br />
:(4)复制因子结构 Replicator Structure,<br />
:(5)被动部件 Passive Parts,<br />
:(6)主动子单元 Active Subunits,<br />
:(7)'''<font color="#ff8000">复制机能量学 Replicator Energetics</font>''',<br />
:(8)'''<font color="#ff8000">复制机运动学 Replicator Kinematics</font>''',<br />
:(9)复制过程 Replication Process,<br />
:(10)复制因子性能 Replicator Performance,<br />
:(11)产物结构 Product Structure,<br />
:(12)可演化性 Evolvability。<br />
<br />
===一种自复制的计算机程序——蒯恩 Quine===<br />
<br />
在[[计算机科学]]中,蒯恩 Quine是一种自复制的计算机程序:当计算机执行这个程序时,程序会输出自身的源代码。例如,利用Python语言编写的一个Quine程序如下:<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
<br />
将以上代码写入文件,然后执行,会发现程序的输出就是程序代码自身:<br />
<br />
[[文件:Quine.png|缩略图|左|Quine-python实现]]<br />
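也可以不运行外部文件,而用下面的Python片段(仅作示意)手工展开这段Quine的求值过程,验证其输出与源代码逐字符一致:<br />

```python
# Quine 的完整源代码,作为字符串保存以便比对
quine_source = "a='a=%r;print(a%%a)';print(a%a)"

# 手工模拟其求值过程:先绑定 a,再执行 % 格式化
# %r 会把 a 的 repr(带引号)代入,%% 则变成字面的 %
a = 'a=%r;print(a%%a)'
output = a % a  # 这正是 print 将要输出的字符串

# 输出与源代码完全相同,自复制成立
assert output == quine_source
```

断言通过,说明该程序打印出的字符串正是它自己的源代码。<br />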
<br />
一种更简单的方法是编写一个程序,这个程序将复制它所指向的任何数据流,然后把程序指向自己。在这种情况下,程序既被当作可执行代码,也被当作要操作的数据。这种方法在包括生物生命在内的大多数自复制系统中都很常见,而且更简单,因为它不需要程序包含对自身的完整描述。<br />
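上述“复制任意数据流、再把它指向自身”的思路可以用如下Python草图演示(仅作示意;假定程序能通过 sys.argv[0] 找到自己的源文件):<br />

```python
import sys

def replicate(stream_in, stream_out):
    """把输入流原样复制到输出流:一个通用的“复制任意数据流”机制。"""
    stream_out.write(stream_in.read())

if __name__ == "__main__":
    # 把复制机制指向程序自身的源文件,输出即程序本身
    with open(sys.argv[0], encoding="utf-8") as src:
        replicate(src, sys.stdout)
```

与Quine不同,这里的程序并不在代码中包含对自身的完整描述,而是把自身文件当作数据读取,正体现了“程序既被当作可执行代码,也被当作要操作的数据”。<br />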
<br />
<br />
在许多编程语言中,空程序也是合法的,执行时不会产生错误或任何输出;由于其输出(空)与其源代码(空)完全相同,空程序因此构成一种平凡的自复制程序。<br />
<br />
===自复制式平铺===<br />
<br />
在几何学中,'''<font color="#ff8000">自复制式平铺 self-replicating tiling</font>'''是一种平铺方法,其中若干个全等的图形可以拼接在一起,形成一个与原图形相似的较大图形。这属于一个被称为'''密铺 tessellation'''的研究领域。由六个正三角形组成、被称为“斯芬克斯 sphinx”的六联三角形 hexiamond 是唯一已知的可自复制的五边形<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref>。例如,四个这样的凹五边形可以拼接成一个与原形状相似、但尺寸为原来2倍的凹五边形。所罗门·格伦布 Solomon W. Golomb<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref>为这样的自复制纹样创造了'''rep-tiles'''这个术语。<br />
<br />
<br />
2012年,李·萨洛斯 Lee Sallows将rep-tiles定义为自平铺纹样集(setiset)的一个特例。一个''<math>n</math>''阶的自平铺纹样集是由''<math>n</math>''个形状组成的集合,它们能以''<math>n</math>''种不同的方式拼合,分别形成这''<math>n</math>''个形状的放大复制品。所有组成形状互不相同的自平铺纹样集被称为“完美的 perfect”;而一个''<math>n</math>''阶的rep-tile不过是由''<math>n</math>''个相同形状组成的自平铺纹样集。<br />
{|<br />
|- style="vertical-align:bottom;"<br />
[[File:Self-replication_of_sphynx_hexidiamonds.svg|thumb|left|text-bottom|260px|可以将四个“sphinx”拼在一起以形成另一个sphinx。]]<br />
[[File:A rep-tile-based_setiset_of_order_4.png|thumb|right|text-bottom|290px|一个基于rep-tile的4阶setiset]]<br />
|}<br />
{{clear}}<br />
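上文提到四个“斯芬克斯”可以拼出2倍大小的原形,这正是面积与线性尺寸的平方关系:<math>n</math>个全等拷贝拼成相似形时,线性放大倍数为<math>\sqrt{n}</math>。下面的Python片段(仅作示意)核对这一关系:<br />

```python
import math

def scale_factor(n: int) -> float:
    """n 阶 rep-tile(n 个全等拷贝拼成相似形)的线性放大倍数。

    面积放大 n 倍,故线性尺寸放大 sqrt(n) 倍。
    """
    return math.sqrt(n)

# “斯芬克斯”六联三角形:4 个拷贝拼成 2 倍大小的自身
assert scale_factor(4) == 2.0
# 9 个拷贝则拼成 3 倍大小
assert scale_factor(9) == 3.0
```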
<br />
===自复制的粘土晶体===<br />
<br />
粘土晶体中存在一种不基于DNA或RNA的天然自复制。<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref>粘土由大量小晶体组成,并且是促进晶体生长的环境。晶体由规则的原子晶格构成,放入含有晶体成分的水溶液后便能生长,自动把溶液中的原子按晶体形式排列到晶体边界上。当正常的原子结构被打乱时,晶体便带有不规则性;晶体生长时,这些不规则性可能随之传播,形成一种不规则晶体的自我复制。由于这些不规则结构可能影响晶体随后断裂并形成新晶体的概率,带有这种不规则结构的晶体甚至可以被认为经历着演化过程。<br />
<br />
<br />
===应用===<br />
<br />
一些工程科学的长期目标是制造出可以自复制的'''<font color="#ff8000">铿锵复制因子 clanking replicator</font>'''。通常的动机是在保证产品功效的同时降低单件成本。许多权威人士表示,自复制产品的单位重量成本最终应能逼近木材或其他生物材料,因为自复制省去了传统工业产品所需的劳动力、资本和分销成本。<br />
<br />
<br />
制造出一个全新的人工复制机是一个合理的近期目标。<br />
<br />
<br />
美国宇航局最近的一项研究表明,铿锵复制机的复杂度大约相当于英特尔奔腾4处理器的复杂度。<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> 也就是说,这项技术在一个合理的商业时间规模内,是可以由一个相对较小的工程团队以一个合理的成本实现的。<br />
<br />
<br />
目前学术界对生物技术有着浓厚兴趣,这一领域也有大量资金,这正是尝试利用现有细胞的复制能力的好时机,而且有较大期望可以产生重要的理解和进展。<br />
<br />
<br />
自复制的一种变体在编译器构造中具有实际意义,在天然自复制中也会出现类似的自我改进现象。编译器(表现型)可以应用于编译器自身的源代码(基因型) ,从而产生编译器本身。在编译器开发过程中,一般使用修改(变异)的源代码来创建下一代编译器。这个过程不同于天然的自我复制,因为这个过程是由工程师指导的,而不是复制机本身。<br />
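上述"编译器作用于自身源代码"的过程可以用一个玩具示例来示意(此示例为编者补充:其中的 compile_ 只是一个假想的源到源变换,以"去除行尾空白"代替真正的编译,并非任何真实编译器):<br />

```python
# 第 0 代"编译器"的源代码(基因型):一个源到源变换
compiler_src = '''
def compile_(source):
    # 以去除各行行尾空白代替真正的编译变换
    return "\\n".join(line.rstrip() for line in source.splitlines()) + "\\n"
'''

namespace = {}
exec(compiler_src, namespace)               # 运行第 0 代编译器(表现型)
gen1 = namespace["compile_"](compiler_src)  # 把编译器应用于它自己的源代码
gen2 = namespace["compile_"](gen1)          # 对产物再应用一次

# 这个玩具变换是幂等的:第一代产物已是不动点,如同稳定的自复制
assert gen1 == gen2
```

在真实的编译器开发中,被编译的源代码通常带有修改(变异),由旧编译器编译出功能更强的新编译器,这一过程称为自举 bootstrapping。<br />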
<br />
==机械中的自复制==<br />
<br />
机器人学领域的一项活动就是机器的自复制。由于(至少在现代)所有机器人都共享相当多的基本特性,一台自复制机器人(或者一群这样的机器人)需要做到以下几点:<br />
<br />
*获得构建材料<br />
*制造新零件,包括最小的零部件和"思维"(控制)组件<br />
*提供一个稳定一致的动力源<br />
*为新成员编程<br />
*改正子代产物的任何错误<br />
<br />
<br />
在纳米级别上,组装者也可能被设计成在自身动力下进行自复制。这反过来又导致了“灰蛊”''grey goo'' 版本的世界末日,就像在诸如《花开》''Bloom'',《掠食》''Prey'' 和《递归》''Recursion'' 这样的科幻小说中描述的那样。<br />
<br />
<br />
美国远见研究所''Foresight Institute'' 已经为机械自复制领域的研究者们发布了指导方针。<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> 指导方针建议研究者使用一些特定的技术来防止机械复制因子失控,比如使用广播结构''broadcast architecture''。<br />
<br />
<br />
关于与工业时代相关的机械复制的详细文章,请参阅[https://en.wikipedia.org/wiki/Mass_production 大规模生产] ''mass production''。<br />
<br />
==研究领域==<br />
以下领域已开展的与自复制相关的研究:<br />
<br />
* 生物学研究自然复制和复制因子及其相互作用。这些可以成为规避自我复制机制设计困难的重要指导。<br />
* 在化学领域,自我复制研究通常特指一小组分子分工协作、相互催化合成彼此的体系(这通常属于系统化学的研究范畴)<ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref>。<br />
* '''模因论 ''Memetics''''' 研究思想及其在人类文化中的传播。'''模因 Meme'''只需要很少的物质载体,因此在理论上与病毒相似,常被形容为"病毒式"传播。<br />
* 分子纳米技术旨在制造能组装纳米级产品的组装器。如果没有自我复制,分子机器的研发资金和组装成本将高得难以想象。<br />
* 空间资源: 美国航天局资助了一些设计研究,探索通过自我复制机开发空间资源。这些设计大多包括能复制自身的计算机控制机器。<br />
* 计算机安全: 许多计算机安全问题是由能感染计算机的自复制程序造成的,即计算机蠕虫和计算机病毒。<br />
* 在并行计算中,在大型计算机集群或分布式计算系统的每个节点上手动加载新程序需要很长时间。使用移动代理程序自动加载新程序可以为系统管理员节省大量时间,并更快地向用户交付结果,前提是这些程序不会失控。<br />
<br />
==工业==<br />
===太空探索和制造业===<br />
<br />
太空系统中自复制的目标是以较低的发射质量开发利用太空中的大量物质。例如,一个自养的自复制机械可以用太阳能电池覆盖月球或某颗行星,并通过微波将能量传送回地球。一旦就位,这些建造了自身的机器同样可以生产原材料或制成品,包括运输这些产品的运输系统。另一种自复制机械的设想则是在星系乃至宇宙尺度上复制自身,并把信息传回来。<br />
<br />
<br />
一般来说,由于这些系统是自养的,他们是已知最困难和复杂的复制因子。它们也被认为是最危险的复制因子,因为它们不需要人类的任何投入来繁殖。<br />
<br />
<br />
一个关于太空中复制因子的经典理论研究是1980年由 NASA 的罗伯特·弗雷塔斯 Robert Freitas 编辑的关于自养铿锵复制因子的研究。<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
<br />
大部分的设计研究都关注于采用一个简单、灵活的化学系统来处理月球表面的风化层,以及复制因子所需的元素比例与能从风化层中获得的元素比例之间的差异。限制性元素是'''氯 (Chlorine)''',它是把风化层加工提取铝所必需的元素。氯在月球风化层中非常稀少,因此从外部补充适量的氯可以实现更快的复制速度。<br />
<br />
<br />
参考设计采用了由小型计算机控制、在导轨上行驶的电动运输车。每辆车配备一只简单的机械手或一个小型推土铲,即构成一台基本的机器人。<br />
<br />
<br />
电力将由支撑在支柱上的“天篷”状的太阳能电池提供。其他的机器可以在天篷下面运转。<br />
<br />
<br />
一个“铸造机器人”将使用一个机械手臂和一些雕刻工具来制作石膏模具。石膏模具易于制作,而且能够生产表面光洁度好且精密的零件。然后,机器人将用非导电熔岩(玄武岩)或纯金属铸造大部分零件。它内部的电炉可将这些材料熔化。<br />
<br />
<br />
他们提出了一个探索性的、更为复杂的"芯片工厂 chip factory"来生产计算机和电子系统,但设计师们也表示,像运送"维生素"一样把这些芯片从地球运去,或许会被证明更为可行。<br />
<br />
===分子制造业===<br />
纳米技术学家尤其相信,在人类设计出一种纳米尺度的自复制组装器之前,他们的工作很可能无法达到成熟的状态[http://www.MolecularAssembler.com/KSRM/4.11.3.htm]。 <br />
<br />
<br />
这些系统比自养系统简单得多,因为纯净的原料和能源会被预先提供给它们。它们不需要再生这些材料。这种区别是关于分子制造是否可行的一些争论的根源。许多权威认为这是不可能的,他们明确地引证了复杂自养自复制系统的资料;而许多认同这种可能性的权威人士清楚地引用了已经被证明的更简单的自组装系统的资料。与此同时,2003年的一项实验展示了一个乐高积木自主机器人,它能够按照预先设定的轨道,从外部提供的4个组件开始,精确地组装出自己的复制品。[http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
<br />
仅仅利用现有细胞的复制能力是不够的,因为蛋白质的生物合成过程中存在局限性。<br />
<br />
<br />
我们需要的是合理设计一种具有更广泛合成能力的全新复制因子。<br />
<br />
<br />
2011年,纽约大学的科学家们开发出了可自复制的人造结构,这一过程有产生新型材料的潜力。他们已经证明,这种结构不仅可以复制像细胞 DNA 或 RNA 这样的分子,而且可以复制能够呈现许多不同形态、具有许多不同功能特征、并与许多不同类型的化学物种相关联的离散结构。<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
<br />
<br />
有关假设的自我复制系统的其他化学基础的讨论,请参阅[https://en.wikipedia.org/wiki/Alternative_biochemistry 替代生物化学] ''alternative biochemistry''。<br />
<br />
==参阅==<br />
*[https://zhuanlan.zhihu.com/p/135833919 从自我复制到自我意识]<br />
* [[人工生命]] Artificial life<br />
* [https://en.wikipedia.org/wiki/Astrochicken 太空鸡实验] Astrochicken<br />
* [[自创生理论]]Autopoiesis<br />
* [[复杂系统]]Complex system<br />
* [https://en.wikipedia.org/wiki/DNA_replication DNA复制] DNA replication<br />
* [[自我复制机器]]Self-replicating machine<br />
** [[自我复制空间飞行器 Self-replicating spacecraft]]<br />
* [[空间制造 Space manufacturing]]<br />
* [[冯·诺依曼宇宙构造函数 Von Neumann universal constructor]]<br />
* [[冯·诺依曼机 Von Neumann machine (disambiguation)]]<br />
* [[自重构 Self reconfigurable]]<br />
* [[最终人存原理 Final Anthropic Principle]]<br />
* [[正反馈 Positive feedback]]<br />
* [[谐 Harmonic]]<br />
<br />
<br><br />
<br />
==参考文献==<br />
{{reflist}}<br />
<br />
<br />
==其他文献==<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
* Advanced Automation for Space Missions, a 1980 NASA study edited by Robert Freitas<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines] first comprehensive survey of entire field in 2004 by Robert Freitas and Ralph Merkle<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advance Concepts study by General Dynamics]- concluded that complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
* ''Gödel, Escher, Bach'' by Douglas Hofstadter (detailed discussion and many examples)<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.) Contemporary Math. vol. 135 (1992), 239-264.<br />
<br />
{{refend}}<br />
<br />
==编者推荐==<br />
===相关资料===<br />
====[http://www.nyx.net/~gthompso/quine.htm The Quine Page]====<br />
该网页收集了各种各样的quine程序。<br />
<br />
<br />
===相关课程===<br />
[[File:自复制课程推荐.png|400px|right|thumb|自复制——抵抗热力学第二定律的崭新方法]]<br />
====[https://campus.swarma.org/course/1127 自复制——抵抗热力学第二定律的崭新方法]====<br />
大自然竟然找到了另一种抵抗热力学第二定律的途径,这就是自复制。我们可以想象,一个能够自复制的斑图在一片随机混沌之海中以非常小概率的诞生,但是一旦出现,它就会不断繁衍,自我复制。它就像找到了概率论中的“后门”,让自复制生命变成了必然事件。本课程中,张江老师将分析自复制是如何出现,来抵抗热力学第二定律。<br />
<br />
'''课程大纲'''<br />
*认识到生命自复制与热力学第二定律的关系<br />
*理解von Neumann研究自复制自动机的动机<br />
*自复制自动机的基本构成<br />
*自复制将热力学第二定律变废为宝的逻辑<br />
*遗传算法的基本原理<br />
*什么是复杂适应系统?<br />
<br />
<br />
----<br />
本中文词条由[[用户:Qige96|Ricky]]、[[用户:Paradoxist-Paradoxer|Paradoxist-Paradoxer]]、[[用户:薄荷|薄荷]]审校,欢迎在讨论页面留言。<br />
<br />
'''本词条内容源自公开资料,遵守 CC3.0协议。'''</div>
<!-- 合成生物学,2020-12-01,贡献者:粲兰 -->
<hr />
<div>此词条暂由袁一博翻译,翻译字数共4491,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{redirect|Artificial life form|simulated life forms|Artificial life}}<br />
<br />
{{short description|Interdisciplinary branch of biology and engineering}}<br />
<br />
{{Synthetic biology}}<br />
<br />
[[File:Synthetic Biology Research at NASA Ames.jpg|thumb|Synthetic Biology Research at [[Ames Research Center|NASA Ames Research Center]]. NASA埃姆斯研究中心的合成生物学研究。]]<br />
<br />
<br />
<br />
<br />
'''Synthetic biology''' ('''SynBio''') is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
<br />
合成生物学(SynBio)是一个多学科的研究领域,旨在创造新的生物部件、设备和系统,或重新设计已经在自然界中发现的系统。<br />
<br />
<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as [[biotechnology]], [[genetic engineering]], [[molecular biology]], [[molecular engineering]], [[systems biology]], [[Model lipid bilayer|membrane science]], [[biophysics]], [[Biological engineering|chemical and biological engineering]], [[Electrical engineering|electrical and computer engineering]], [[control engineering]] and [[evolutionary biology]].<br />
<br />
<br />
它是科学的一个分支,涵盖了来自多个学科的广泛方法,例如生物技术、基因工程、分子生物学、分子工程、系统生物学、膜科学、生物物理学、化学与生物工程、电子与计算机工程、控制工程以及进化生物学。<br />
<br />
<br />
<br />
Due to more powerful [[genetic engineering]] capabilities and decreased DNA synthesis and [[DNA sequencing|sequencing costs]], the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<ref>{{cite journal | last1 = Bueso | first1 = F. Y. | last2 = Tangney | first2 = M. | year = 2017 | title = Synthetic Biology in the Driving Seat of the Bioeconomy | url = | journal = Trends in Biotechnology | volume = 35 | issue = 5| pages = 373–378 | doi = 10.1016/j.tibtech.2017.02.002 | pmid = 28249675 }}</ref><br />
<br />
<br />
由于更强大的基因工程能力和降低的 DNA 合成及测序成本,合成生物学领域正在迅速发展。2016年,来自40个国家的350多家公司积极参与合成生物学应用; 所有这些公司在全球市场的净值估计为39亿美元。<br />
<br />
<br />
<br />
== Definition 定义 ==<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
<br />
合成生物学目前还没有公认的定义。以下是一些定义的示例:<br />
<br />
<br />
<br />
* "the use of a mixture of physical engineering and genetic engineering to create new (and, therefore, synthetic) life forms混合使用物理工程和基因工程来创建新的(因而也即合成的)生命形式。"<ref>{{cite journal | last1 = Hunter | first1 = D | year = 2013 | title = How to object to radically new technologies on the basis of justice: the case of synthetic biology | url = | journal = Bioethics | volume = 27 | issue = 8| pages = 426–434 | doi = 10.1111/bioe.12049 | pmid = 24010854 }}</ref><br />
<br />
<br />
* "an emerging field of research that aims to combine the knowledge and methods of biology, engineering and related disciplines in the design of chemically synthesized DNA to create organisms with novel or enhanced characteristics and traits一个新兴的研究领域,旨在将生物学,工程学和相关学科领域的知识和方法结合到化学合成DNA 的设计中,从而创造出具有新颖或增强特性和特征的有机体。<br />
"<ref>{{cite journal | last1 = Gutmann | first1 = A | year = 2011 | title = The ethics of synthetic biology: guiding principles for emerging technologies | url = | journal = Hastings Center Report | volume = 41 | issue = 4| pages = 17–22 | doi = 10.1002/j.1552-146x.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
* "designing and constructing [[BioBrick|biological modules]], [[biological systems]], and [[biological machine]]s or, re-design of existing biological systems for useful purposes设计并构建生物积木、生物系统以及生物机器,或为有用的目的重新设计现有的生物系统。"<ref name="NakanoEckford2013">{{cite book|url={{google books |plainurl=y |id=uVhsAAAAQBAJ}}|title=Molecular Communication|last1=Nakano|first1=Tadashi|last2=Eckford|first2=Andrew W.|last3=Haraguchi|first3=Tokuko|date=12 September 2013|publisher=Cambridge University Press|isbn=978-1-107-02308-6|name-list-style=vanc}}</ref><br />
<br />
<br />
* “applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature” (The European Commission, 2005)This can include the possibility of a [[molecular assembler]], based upon biomolecular systems such as the [[ribosome]]”<ref name="RoadMap">{{Cite web|url=http://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf|title=Productive Nanosystems: A Technology Roadmap|website=Foresight Institute}}</ref><br />
将系统设计的工程范式应用到生物系统中,以产生具有自然界中不存在的新功能的可预测且健全的系统”(欧洲委员会,2005年),这可能包括基于生物分子系统——例如核糖体——的分子组合器的可能性。<br />
<br />
<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
<br />
值得注意的是,合成生物学在传统上被分为两种不同的方法: 自上而下和自下而上。<br />
<br />
<br />
<br />
# The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
<br />
自上而下的方法包括利用代谢和基因工程技术赋予活细胞以新的功能。<br />
<br />
# The <u>bottom up</u> approach involves creating new biological systems ''in vitro'' by bringing together 'non-living' biomolecular components,<ref>{{cite journal | vauthors = Schwille P | title = Bottom-up synthetic biology: engineering in a tinkerer's world | journal = Science | volume = 333 | issue = 6047 | pages = 1252–4 | date = September 2011 | pmid = 21885774 | doi = 10.1126/science.1211701 | bibcode = 2011Sci...333.1252S | s2cid = 43354332 }}</ref> often with the aim of constructing an [[artificial cell]].<br />
<br />
<br />
自下而上的方法包括在体外创建新的生物系统,将“非活性”的生物分子组件聚集在一起,其目的通常是构建一个人工细胞。<br />
<br />
<br />
<br />
Biological systems are thus assembled module-by-module. [[Cell-free protein synthesis|Cell-free protein expression systems]] are often employed,<ref>{{cite journal | vauthors = Noireaux V, Libchaber A | title = A vesicle bioreactor as a step toward an artificial cell assembly | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 101 | issue = 51 | pages = 17669–74 | date = December 2004 | pmid = 15591347 | pmc = 539773 | doi = 10.1073/pnas.0408236101 | bibcode = 2004PNAS..10117669N }}</ref><ref>{{cite journal | vauthors = Hodgman CE, Jewett MC | title = Cell-free synthetic biology: thinking outside the cell | journal = Metabolic Engineering | volume = 14 | issue = 3 | pages = 261–9 | date = May 2012 | pmid = 21946161 | pmc = 3322310 | doi = 10.1016/j.ymben.2011.09.002 }}</ref><ref>{{cite journal | vauthors = Elani Y, Law RV, Ces O | title = Protein synthesis in artificial cells: using compartmentalisation for spatial organisation in vesicle bioreactors | journal = Physical Chemistry Chemical Physics | volume = 17 | issue = 24 | pages = 15534–7 | date = June 2015 | pmid = 25932977 | doi = 10.1039/C4CP05933F | bibcode = 2015PCCP...1715534E | doi-access = free }}</ref> as are membrane-based molecular machinery. 
There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells,<ref>{{cite journal | vauthors = Elani Y, Trantidou T, Wylie D, Dekker L, Polizzi K, Law RV, Ces O | title = Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules | journal = Scientific Reports | volume = 8 | issue = 1 | pages = 4564 | date = March 2018 | pmid = 29540757 | pmc = 5852042 | doi = 10.1038/s41598-018-22263-3 | bibcode = 2018NatSR...8.4564E }}</ref> and engineering communication between living and synthetic cell populations.<ref>{{cite journal | vauthors = Lentini R, Martín NY, Forlin M, Belmonte L, Fontana J, Cornella M, Martini L, Tamburini S, Bentley WE, Jousson O, Mansy SS | title = Two-Way Chemical Communication between Artificial and Natural Cells | journal = ACS Central Science | volume = 3 | issue = 2 | pages = 117–123 | date = February 2017 | pmid = 28280778 | pmc = 5324081 | doi = 10.1021/acscentsci.6b00330 }}</ref><br />
<br />
<br />
生物系统就是这样一个模块一个模块地组装起来的。无细胞蛋白表达系统与基于膜的分子机器一样,都经常被采用。通过构建活细胞/合成细胞的混合体,以及在活细胞群与合成细胞群之间建立工程化的通讯,人们正越来越多地努力弥合这两种方法之间的鸿沟。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
'''1910:''' First identifiable use of the term "synthetic biology" in [[Stéphane Leduc]]'s publication ''Théorie physico-chimique de la vie et générations spontanées''.<ref>[https://openlibrary.org/books/OL23348076M/Théorie_physico-chimique_de_la_vie_et_générations_spontanées Théorie physico-chimique de la vie et générations spontanées, S. Leduc, 1910]</ref> He also noted this term in another publication, ''La Biologie Synthétique'' in 1912.<ref>{{cite book |url=http://www.peiresc.org/bstitre.htm |title=La biologie synthétique, étude de biophysique |last=Leduc |first=Stéphane |date=1912 | veditors = Poinat A }}</ref><br />
<br />
<br />
1910年: 斯特凡纳·勒杜克 (Stéphane Leduc) 在其出版物《Théorie physico-chimique de la vie et générations spontanées》中首次可考地使用了"合成生物学 synthetic biology"一词。他还在1912年的另一本出版物《La Biologie Synthétique》中提到了这个术语。<br />
<br />
<br />
<br />
'''1961:''' Jacob and Monod postulate cellular regulation by molecular networks from their study of the ''lac'' operon in ''E. coli'' and envisioned the ability to assemble new systems from molecular components.<ref>Jacob, F.ß. & Monod, J. On the regulation of gene activity. Cold Spring Harb. Symp. Quant. Biol. 26, 193–211 (1961).</ref><br />
<br />
<br />
1961年: 雅各布 (Jacob) 和莫诺德 (Monod) 通过对大肠杆菌中乳糖操纵子 ''lac'' operon 的研究,提出了细胞受分子网络调控的假说,并设想了由分子组件组装新系统的能力。<br />
<br />
<br />
<br />
'''1973:''' First molecular cloning and amplification of DNA in a plasmid is published in ''P.N.A.S.'' by Cohen, Boyer ''et al.'' constituting the dawn of synthetic biology.<ref>{{cite journal | vauthors = Cohen SN, Chang AC, Boyer HW, Helling RB | title = Construction of biologically functional bacterial plasmids in vitro | journal = Proc. Natl. Acad. Sci. USA | volume = 70 | issue = 11 | pages = 3240–3244 | date = 1973 | pmid = 4594039 | doi = 10.1073/pnas.70.11.3240 | bibcode = 1973PNAS...70.3240C | pmc = 427208 }}</ref><br />
<br />
<br />
1973年: 第一篇关于在质粒中对 DNA 进行分子克隆和扩增的文章由科恩 (Cohen)、博耶 (Boyer) 等人发表在《P.N.A.S.》上,标志着合成生物学的开端。<br />
<br />
<br />
<br />
'''1978:''' [[Werner Arber|Arber]], [[Daniel Nathans|Nathans]] and [[Hamilton O. Smith|Smith]] win the [[Nobel Prize in Physiology or Medicine]] for the discovery of [[restriction enzyme]]s, leading Szybalski to offer an editorial comment in the journal ''[[Gene (journal)|Gene]]'':<br />
<br />
<br />
1978年: 阿尔伯 (Arber)、纳森斯 (Nathans) 和史密斯 (Smith) 因发现限制性内切酶而获得诺贝尔生理学或医学奖,这促使齐巴尔斯基 (Szybalski) 在《基因 Gene》杂志上发表了一篇社论评论:<br />
<br />
<br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.<ref>{{cite journal | vauthors = Szybalski W, Skalka A | title = Nobel prizes and restriction enzymes | journal = Gene | volume = 4 | issue = 3 | pages = 181–2 | date = November 1978 | pmid = 744485 | doi = 10.1016/0378-1119(78)90016-1 }}</ref></blockquote><br />
<br />
<br />
<blockquote>限制性核酸酶的研究不仅使我们能够很容易地构建重组 DNA 分子和分析单个基因,而且把我们带入了合成生物学的新时代:不仅可以描述和分析现有的基因,还可以构建和评估新的基因排列。</blockquote><br />
<br />
<br />
<br />
'''1988:''' First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in ''Science'' by Mullis ''et al.''<ref>{{cite journal | vauthors = Saiki RK, Gelfand DH, Stoffel S, Scharf SJ, Higuchi R, Horn GT, Mullis KB, Erlich HA | title = Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase | journal = Science | volume = 239 | issue = 4839 | pages = 487–491 | date = 1988 | pmid = 2448875 | doi = 10.1126/science.239.4839.487 }}</ref> This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
<br />
1988年: 马利斯 (Mullis) 等人在《科学》杂志上发表了首次利用热稳定 DNA 聚合酶进行聚合酶链式反应 (PCR) 扩增 DNA 的成果。这避免了在每个 PCR 循环后补加新的 DNA 聚合酶,从而大大简化了 DNA 的突变和组装。<br />
<br />
<br />
<br />
'''2000:''' Two papers in [[Nature (journal)|Nature]] report [[synthetic biological circuits]], a genetic toggle switch and a biological clock, by combining genes within [[Escherichia coli|''E. coli'']] cells.<ref name=":0">{{cite journal | vauthors = Elowitz MB, Leibler S | title = A synthetic oscillatory network of transcriptional regulators | journal = Nature | volume = 403 | issue = 6767 | pages = 335–8 | date = January 2000 | pmid = 10659856 | doi = 10.1038/35002125 | bibcode = 2000Natur.403..335E | s2cid = 41632754 }}</ref><ref name=":1">{{cite journal | vauthors = Gardner TS, Cantor CR, Collins JJ | title = Construction of a genetic toggle switch in Escherichia coli | journal = Nature | volume = 403 | issue = 6767 | pages = 339–42 | date = January 2000 | pmid = 10659857 | doi = 10.1038/35002131 | bibcode = 2000Natur.403..339G | s2cid = 345059 }}</ref><br />
<br />
<br />
2000年: 《自然》杂志的两篇论文报告了通过组合大肠杆菌细胞内的基因而构建的两种合成生物电路: 一个基因切换开关和一个生物钟。<br />
<br />
<br />
<br />
'''2003:''' The most widely used standardized DNA parts, [[BioBrick]] plasmids, are invented by [[Tom Knight (scientist)|Tom Knight]].<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
<br />
2003年: 最广泛使用的标准化 DNA 部件,生物积木 BioBrick 质粒,由汤姆·奈特 (Tom Knight) 发明。这些部件将成为次年在麻省理工学院创办的国际基因工程机器大赛 (iGEM) 的核心。<br />
<br />
<br />
<br />
[[File:Synthetic Biology Open Language (SBOL) standard visual symbols.png|thumb|upright=1.25| [[Synthetic Biology Open Language]] (SBOL) standard visual symbols for use with [[BioBrick|BioBricks Standard]] 与生物积木标准一起使用的合成生物学开放式语言 (SBOL) 标准视觉符号]]<br />
<br />
<br />
<br />
<br />
'''2003:''' Researchers engineer an artemisinin precursor pathway in ''E. coli''.<ref>Martin, V. J., Pitera, D. J., Withers, S. T., Newman, J. D. & Keasling, J. D. Engineering a mevalonate pathway in Escherichia coli for production of terpenoids. Nature Biotech. 21, 796–802 (2003).</ref><br />
<br />
<br />
2003年: 研究人员在大肠杆菌中设计出青蒿素前体途径。<br />
<br />
<br />
<br />
'''2004:''' First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
<br />
2004年: 第一届合成生物学国际会议,合成生物学1.0(SB1.0)在美国麻省理工学院举行。<br />
<br />
<br />
<br />
'''2005:''' Researchers develop a light-sensing circuit in ''E. coli''.<ref>{{cite journal | last1 = Levskaya | first1 = A. | display-authors = etal | year = 2005 | title = "Synthetic biology " engineering Escherichia coli to see light | url = | journal = Nature | volume = 438 | issue = 7067| pages = 441–442 | doi = 10.1038/nature04405 | pmid = 16306980 | s2cid = 4428475 }}</ref> Another group designs circuits capable of multicellular pattern formation.<ref>Basu, S., Gerchman, Y., Collins, C. H., Arnold, F. H. & Weiss, R. "A synthetic multicellular system for programmed pattern formation. ''Nature'' 434,</ref><br />
<br />
<br />
2005年: 研究人员在大肠杆菌中开发出一种感光电路。另一个研究小组设计出了能够形成多细胞模式的电路。<br />
<br />
<br />
<br />
'''2006:''' Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<ref>{{cite journal | last1 = Anderson | first1 = J. C. | last2 = Clarke | first2 = E. J. | last3 = Arkin | first3 = A. P. | last4 = Voigt | first4 = C. A. | year = 2006 | title = Environmentally controlled invasion of cancer cells by engineered bacteria | url = | journal = J. Mol. Biol. | volume = 355 | issue = 4| pages = 619–627 | doi = 10.1016/j.jmb.2005.10.076 | pmid = 16330045 }}</ref><br />
<br />
<br />
2006年: 研究人员设计了一种能促进细菌侵入肿瘤细胞的合成电路。<br />
<br />
<br />
<br />
'''2010:''' Researchers publish in ''Science'' the first synthetic bacterial genome, called ''M. mycoides'' JCVI-syn1.0.<ref name="gibson52" /><ref>{{Cite news|url=https://www.telegraph.co.uk/news/science/science-news/7747779/American-scientist-who-created-artificial-life-denies-playing-God.html|title=American scientist who created artificial life denies 'playing God'|last=|first=|date=May 2010|website=The Telegraph|url-status=live|archive-url=|archive-date=|access-date=}}</ref> The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
<br />
2010年: 研究人员在《科学》杂志上发表了第一个人工合成的细菌基因组,名为丝状支原体 M. mycoides JCVI-syn1.0。该基因组由化学合成的 DNA 经酵母重组组装而成。<br />
<br />
<br />
<br />
'''2011:''' Functional synthetic chromosome arms are engineered in yeast.<ref>{{cite journal | last1 = Dymond | first1 = J. S. | display-authors = etal | year = 2011 | title = Synthetic chromosome arms function in yeast and generate phenotypic diversity by design | url = | journal = Nature | volume = 477 | issue = 7365 | pages = 816–821 | doi = 10.1038/nature10403 | pmid = 21918511 | pmc = 3774833 }}</ref><br />
<br />
<br />
2011年: 成功在酵母中设计出功能性合成染色体臂。<br />
<br />
<br />
<br />
'''2012:''' Charpentier and Doudna labs publish in ''Science'' the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.<ref>{{cite journal | vauthors = Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E | title = A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity | journal = Science | volume = 337 | issue = 6096 | pages = 816–821 | date = 2012 | pmid = 22745249 | doi = 10.1126/science.1225829 | pmc = 6286148 }}</ref> This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
<br />
2012年: Charpentier 和 Doudna 实验室在《科学》杂志上发表了 CRISPR-Cas9细菌免疫系统的程序设计,用于靶向 DNA 的裂解。这项技术极大地简化和扩展了真核生物的基因编辑。<br />
<br />
<br />
<br />
'''2019:''' Scientists at [[ETH Zurich]] report the creation of the first [[bacterial genome]], named ''[[Caulobacter crescentus|Caulobacter ethensis-2.0]]'', made entirely by a computer, although a related [[wikt:viability|viable form]] of ''C. ethensis-2.0'' does not yet exist.<ref name="EA-20190401">{{cite news |author=ETH Zurich |title=First bacterial genome created entirely with a computer |url=https://www.eurekalert.org/pub_releases/2019-04/ez-fbg032819.php |date=1 April 2019 |work=[[EurekAlert!]] |accessdate=2 April 2019 |author-link=ETH Zurich }}</ref><ref name="PNAS20190401">{{cite journal |author=Venetz, Jonathan E. |display-authors=et al. |title=Chemical synthesis rewriting of a bacterial genome to achieve design flexibility and biological functionality |date=1 April 2019 |journal=[[Proceedings of the National Academy of Sciences of the United States of America]] |volume=116 |issue=16 |pages=8070–8079 |doi=10.1073/pnas.1818259116 |pmid=30936302 |pmc=6475421 }}</ref><br />
<br />
<br />
2019年: 苏黎世联邦理工学院 (ETH Zurich) 的科学家报告说,他们已经创造出了第一个细菌基因组,并将其命名为 Caulobacter ethensis-2.0 ,这个基因组完全是由计算机制造的,尽管与之相关的可存活的Caulobacter ethensis-2.0还不存在。<br />
<br />
<br />
<br />
'''2019:''' Researchers report the production of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515">{{cite news |last=Zimmer |first=Carl |authorlink=Carl Zimmer |title=Scientists Created Bacteria With a Synthetic Genome. Is This Artificial Life? - In a milestone for synthetic biology, colonies of E. coli thrive with DNA constructed from scratch by humans, not nature. |url=https://www.nytimes.com/2019/05/15/science/synthetic-genome-bacteria.html |date=15 May 2019 |work=[[The New York Times]] |accessdate=16 May 2019 }}</ref><ref name="NAT-20190515">{{cite journal |author=Fredens, Julius |display-authors=et al. |title=Total synthesis of Escherichia coli with a recoded genome |date=15 May 2019 |journal=[[Nature (journal)|Nature]] |volume=569 |issue=7757 |pages=514–518 |doi=10.1038/s41586-019-1192-5 |pmid=31092918 |pmc=7039709 |bibcode=2019Natur.569..514F }}</ref><br />
<br />
== Perspectives ==<br />
<br />
Engineers view biology as a ''technology'' (in other words, a given system's ''[[biotechnology]]'' or its ''[[biological engineering]]'').<ref>{{cite journal | volume = 6 | last = Zeng | first = Jie (Bangzhe) | title = On the concept of systems bio-engineering | journal = Communication on Transgenic Animals, June 1994, CAS, PRC }}</ref> Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see [[Biomedical Engineering]]) and our environment.<ref>{{cite journal | volume = 6 | last = Chopra | first = Paras | author2 = Akhil Kamma | title = Engineering life through Synthetic Biology | journal = In Silico Biology }}</ref><br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. {{citation needed|date=May 2020}}<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. [[Genetic engineering]] includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches shares a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<ref>{{cite journal | vauthors = Channon K, Bromley EH, Woolfson DN | title = Synthetic biology through biomolecular design and engineering | journal = Current Opinion in Structural Biology | volume = 18 | issue = 4 | pages = 491–8 | date = August 2008 | pmid = 18644449 | doi = 10.1016/j.sbi.2008.06.006 }}</ref><br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate.<ref>{{cite journal | first = M | last = Stone | title = Life Redesigned to Suit the Engineering Crowd | journal = Microbe | volume = 1 | issue = 12 | pages = 566–570 | date = 2006 | s2cid = 7171812 | url = https://pdfs.semanticscholar.org/8d45/e0f37a0fb6c1a3c659c71ee9c52619b18364.pdf }}</ref> Re-writers draw inspiration from [[refactoring]], a process sometimes used to improve computer software.<br />
<br />
== Enabling technologies ==<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include [[standardization]] of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.<ref>{{cite journal | vauthors = Baker D, Church G, Collins J, Endy D, Jacobson J, Keasling J, Modrich P, Smolke C, Weiss R | title = Engineering life: building a fab for biology | journal = Scientific American | volume = 294 | issue = 6 | pages = 44–51 | date = June 2006 | pmid = 16711359 | doi = 10.1038/scientificamerican0606-44 | bibcode = 2006SciAm.294f..44B }}</ref> Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and [[computer-aided design]] (CAD).<br />
<br />
=== DNA and gene synthesis ===<br />
<br />
{{Main|Artificial gene synthesis|Synthetic genomics}}Driven by dramatic decreases in costs of [[oligonucleotides|oligonucleotide]] ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level.<ref>{{cite journal | vauthors = Kosuri S, Church GM | title = Large-scale de novo DNA synthesis: technologies and applications | journal = Nature Methods | volume = 11 | issue = 5 | pages = 499–507 | date = May 2014 | pmid = 24781323 | doi = 10.1038/nmeth.2918 | pmc = 7098426 }}</ref> In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) [[Hepatitis C]] virus genome from chemically synthesized 60 to 80-mers.<ref>{{cite journal | vauthors = Blight KJ, Kolykhalov AA, Rice CM | title = Efficient initiation of HCV RNA replication in cell culture | journal = Science | volume = 290 | issue = 5498 | pages = 1972–4 | date = December 2000 | pmid = 11110665 | doi = 10.1126/science.290.5498.1972 | bibcode = 2000Sci...290.1972B }}</ref> In 2002 researchers at [[Stony Brook University]] succeeded in synthesizing the 7741 bp [[poliovirus]] genome from its published sequence, producing the second synthetic genome in a project spanning two years.<ref>{{cite journal | vauthors = Couzin J | title = Virology. 
Active poliovirus baked from scratch | journal = Science | volume = 297 | issue = 5579 | pages = 174–5 | date = July 2002 | pmid = 12114601 | doi = 10.1126/science.297.5579.174b | s2cid = 83531627 | url = https://semanticscholar.org/paper/248000e7bc654631ae217274a77253ceddf270a1 }}</ref> In 2003 the 5386 bp genome of the [[bacteriophage]] [[Phi X 174]] was assembled in about two weeks.<ref name="assembly2003">{{cite journal | vauthors = Smith HO, Hutchison CA, Pfannkoch C, Venter JC | title = Generating a synthetic genome by whole genome assembly: phiX174 bacteriophage from synthetic oligonucleotides | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 26 | pages = 15440–5 | date = December 2003 | pmid = 14657399 | pmc = 307586 | doi = 10.1073/pnas.2237126100 | bibcode = 2003PNAS..10015440S }}</ref> In 2006, the same team, at the [[J. Craig Venter Institute]], constructed and patented a [[Synthetic genomics|synthetic genome]] of a novel minimal bacterium, ''[[Mycoplasma laboratorium]]'' and were working on getting it functioning in a living cell.<ref>{{cite news|url=https://www.nytimes.com/2007/06/29/science/29cells.html|title=Scientists Transplant Genome of Bacteria|last=Wade|first=Nicholas|date=2007-06-29|work=The New York Times|access-date=2007-12-28|issn=0362-4331}}</ref><ref>{{cite journal | vauthors = Gibson DG, Benders GA, Andrews-Pfannkoch C, Denisova EA, Baden-Tillson H, Zaveri J, Stockwell TB, Brownley A, Thomas DW, Algire MA, Merryman C, Young L, Noskov VN, Glass JI, Venter JC, Hutchison CA, Smith HO | title = Complete chemical synthesis, assembly, and cloning of a Mycoplasma genitalium genome | journal = Science | volume = 319 | issue = 5867 | pages = 1215–20 | date = February 2008 | pmid = 18218864 | doi = 10.1126/science.1151721 | bibcode = 2008Sci...319.1215G | s2cid = 8190996 | url = https://semanticscholar.org/paper/8c662fd0e252c85d056aad7ff16009ebe1dd4cbc }}</ref><ref 
name="Ball">{{cite journal|last1=Ball|first1=Philip|date=2016|title=Man Made: A History of Synthetic Life|url=https://www.sciencehistory.org/distillations/magazine/man-made-a-history-of-synthetic-life|journal=Distillations|volume=2|issue=1|pages=15–23|access-date=22 March 2018}}</ref><br />
<br />
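The assembly of constructs and genomes from short oligos rests on sequence overlap between neighbouring fragments. A toy sketch of overlap-based joining — real assembly works by hybridization and polymerase extension, and the fragments plus the `merge`/`assemble` helpers here are illustrative only:

```python
# Stitch short overlapping "oligos" into one longer construct, the principle
# behind PCR-based assembly of the genomes above. Sequences are hypothetical.

def merge(a, b, min_overlap=4):
    """Join b onto a at the longest suffix/prefix overlap."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    raise ValueError("no overlap found")

def assemble(oligos):
    """Left-to-right greedy assembly of an ordered fragment list."""
    construct = oligos[0]
    for frag in oligos[1:]:
        construct = merge(construct, frag)
    return construct

oligos = ["ATGGCTAGCT", "AGCTTTGACC", "GACCGGTTAA"]
print(assemble(oligos))  # ATGGCTAGCTTTGACCGGTTAA
```

Real assembly pipelines must also handle unordered fragments, mis-hybridization and synthesis errors, which is where the error-correction methods mentioned below come in.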
In 2007 it was reported that several companies were offering [[gene synthesis|synthesis of genetic sequences]] up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks.<ref>{{cite news| issn = 0362-4331| last = Pollack| first = Andrew| title = How Do You Like Your Genes? Biofabs Take Orders | work = The New York Times | access-date = 2007-12-28| date = 2007-09-12 | url = https://www.nytimes.com/2007/09/12/technology/techspecial/12gene.html?pagewanted=2&_r=1}}</ref> [[Oligonucleotide]]s harvested from a photolithographic- or inkjet-manufactured [[DNA chip]] combined with PCR and DNA mismatch error-correction allows inexpensive large-scale changes of [[codons]] in genetic systems to improve [[gene expression]] or incorporate novel amino-acids (see [[George M. Church]]'s and Anthony Forster's synthetic cell projects.<ref>{{Cite web|url=http://arep.med.harvard.edu/SBP|title=Synthetic Biology Projects|website=arep.med.harvard.edu|access-date=2018-02-17}}</ref><ref>{{cite journal | vauthors = Forster AC, Church GM | title = Towards synthesis of a minimal cell | journal = Molecular Systems Biology | volume = 2 | issue = 1 | pages = 45 | date = 2006-08-22 | pmid = 16924266 | pmc = 1681520 | doi = 10.1038/msb4100090 }}</ref>) This favors a synthesis-from-scratch approach.<br />
<br />
Additionally, the [[CRISPR|CRISPR/Cas]] system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years".<ref name="washpost_crispr">{{cite news|last1=Basulto|first1=Dominic|title=Everything you need to know about why CRISPR is such a hot technology|url=https://www.washingtonpost.com/news/innovations/wp/2015/11/04/everything-you-need-to-know-about-why-crispr-is-such-a-hot-technology/|access-date=5 December 2015|work=Washington Post|date=November 4, 2015}}</ref> While other methods take months or years to edit gene sequences, CRISPR cuts that time to weeks.<ref name="washpost_crispr" /> Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in [[Do-it-yourself biology|biohacking]].<ref>{{cite news|last1=Kahn|first1=Jennifer|title=The Crispr Quandary|url=https://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html?_r=0|access-date=5 December 2015|work=New York Times|date=November 9, 2015}}</ref><ref>{{cite journal|last1=Ledford|first1=Heidi|title=CRISPR, the disruptor|url=http://www.nature.com/news/crispr-the-disruptor-1.17673|access-date=5 December 2015|agency=Nature News|journal=Nature|date=June 3, 2015|pmid=26040877|doi=10.1038/522020a|volume=522|issue=7554|pages=20–4|bibcode=2015Natur.522...20L|doi-access=free}}</ref><ref>{{cite magazine|last1=Higginbotham|first1=Stacey|title=Top VC Says Gene Editing Is Riskier Than Artificial Intelligence|url=http://fortune.com/2015/12/04/khosla-crispr-ai/|access-date=5 December 2015|magazine=Fortune|date=4 December 2015}}</ref><br />
<br />
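As a rough illustration of how CRISPR/Cas9 edits are designed: ''S. pyogenes'' Cas9 requires an "NGG" PAM immediately 3' of a roughly 20-nt protospacer, so one early design step is simply scanning the target sequence for such sites. A minimal forward-strand-only scan (the sequence and the `find_targets` helper are hypothetical; real tools also scan the reverse strand and score off-target risk):

```python
# Scan a DNA sequence for candidate Cas9 target sites: a 20-nt protospacer
# immediately followed by an "NGG" PAM on the forward strand.

def find_targets(seq, spacer_len=20):
    """Return (start, protospacer, PAM) for every NGG PAM on the + strand."""
    hits = []
    for i in range(spacer_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":                     # N-G-G pattern
            hits.append((i - spacer_len, seq[i - spacer_len:i], pam))
    return hits

dna = "ACGT" * 5 + "TGG" + "AAAA"   # one hypothetical PAM site
for start, protospacer, pam in find_targets(dna):
    print(start, protospacer, pam)
```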
=== Sequencing ===<br />
<br />
[[DNA sequencing]] determines the order of [[nucleotide]] bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<ref>{{cite journal| author = Rollie| date = 2012 |title = Designing biological systems: Systems Engineering meets Synthetic Biology| journal = Chemical Engineering Science| volume = 69 | pages = 1–29| doi=10.1016/j.ces.2011.10.068| issue=1|display-authors=etal}}</ref><br />
<br />
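The verification use of sequencing reduces, in the simplest case, to comparing the designed sequence against the observed read. A minimal sketch — real pipelines use alignment tools to handle insertions and deletions, and the sequences below are made up:

```python
# Compare a designed construct against its sequenced read and report
# substitution positions. Length changes are flagged rather than aligned.

def verify(designed, observed):
    """Return a list of (position, expected, found) substitutions."""
    if len(designed) != len(observed):
        raise ValueError("length mismatch: possible insertion/deletion")
    return [(i, d, o)
            for i, (d, o) in enumerate(zip(designed, observed)) if d != o]

designed = "ATGGCGTTTAAA"
observed = "ATGGCGTTCAAA"
print(verify(designed, observed))  # [(8, 'T', 'C')]
```

An empty result means the fabricated system matches its design at the sequence level, which is exactly the second use case described above.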
=== Microfluidics ===<br />
<br />
[[Microfluidics]], in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them.<ref>{{cite journal | vauthors = Elani Y | title = Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology | journal = Biochemical Society Transactions | volume = 44 | issue = 3 | pages = 723–30 | date = June 2016 | pmid = 27284034 | pmc = 4900754 | doi = 10.1042/BST20160052 }}</ref><ref>{{cite journal | vauthors = Gach PC, Iwai K, Kim PW, Hillson NJ, Singh AK | title = Droplet microfluidics for synthetic biology | journal = Lab on a Chip | volume = 17 | issue = 20 | pages = 3388–3400 | date = October 2017 | pmid = 28820204 | doi = 10.1039/C7LC00576H | osti = 1421856 | url = http://www.escholarship.org/uc/item/6cr3k0v5 }}</ref> It is widely employed in screening assays.<ref>{{cite journal | vauthors = Vinuselvi P, Park S, Kim M, Park JM, Kim T, Lee SK | title = Microfluidic technologies for synthetic biology | journal = International Journal of Molecular Sciences | volume = 12 | issue = 6 | pages = 3576–93 | date = 2011-06-03 | pmid = 21747695 | pmc = 3131579 | doi = 10.3390/ijms12063576 }}</ref><br />
<br />
=== Modularity ===<br />
<br />
The most used<ref name="primer">{{Cite book|title=Synthetic Biology – A Primer|last1=Freemont|first1=Paul S.|last2=Kitney|first2=Richard I.| name-list-style = vanc |date=2012|publisher=World Scientific|isbn=978-1-84816-863-3|doi=10.1142/p837}}</ref>{{rp|22–23}} standardized DNA parts are [[BioBrick]] plasmids, invented by [[Tom Knight (scientist)|Tom Knight]] in 2003.<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> Biobricks are stored at the [[Registry of Standard Biological Parts]] in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the [[international Genetically Engineered Machine]] (iGEM) competition.<ref name="primer" />{{rp|22–23}}<br />
<br />
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilient to harsh conditions). Interactions such as [[coiled coil]]s,<ref>{{cite journal | vauthors = Woolfson DN, Bartlett GJ, Bruning M, Thomson AR | title = New currency for old rope: from coiled-coil assemblies to α-helical barrels | journal = Current Opinion in Structural Biology | volume = 22 | issue = 4 | pages = 432–41 | date = August 2012 | pmid = 22445228 | doi = 10.1016/j.sbi.2012.03.002 }}</ref> [[SH3 domain]]-peptide binding<ref>{{cite journal | vauthors = Dueber JE, Wu GC, Malmirchegini GR, Moon TS, Petzold CJ, Ullal AV, Prather KL, Keasling JD | title = Synthetic protein scaffolds provide modular control over metabolic flux | journal = Nature Biotechnology | volume = 27 | issue = 8 | pages = 753–9 | date = August 2009 | pmid = 19648908 | doi = 10.1038/nbt.1557 | s2cid = 2756476 }}</ref> or [[SpyCatcher|SpyTag/SpyCatcher]]<ref>{{cite journal | vauthors = Reddington SC, Howarth M | title = Secrets of a covalent interaction for biomaterials and biotechnology: SpyTag and SpyCatcher | journal = Current Opinion in Chemical Biology | volume = 29 | pages = 94–9 | date = December 2015 | pmid = 26517567 | doi = 10.1016/j.cbpa.2015.10.002 | doi-access = free }}</ref> offer such control. 
In addition, it is necessary to regulate protein-protein interactions in cells, such as with light (using [[light-oxygen-voltage-sensing domain]]s) or cell-permeable small molecules by [[chemically induced dimerization]].<ref>{{cite journal | vauthors = Bayle JH, Grimley JS, Stankunas K, Gestwicki JE, Wandless TJ, Crabtree GR | title = Rapamycin analogs with differential binding specificity permit orthogonal control of protein activity | journal = Chemistry & Biology | volume = 13 | issue = 1 | pages = 99–107 | date = January 2006 | pmid = 16426976 | doi = 10.1016/j.chembiol.2005.10.017 | doi-access = free }}</ref><br />
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the modeling module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<ref name="altszylerUltrasens2014">{{cite journal | vauthors = Altszyler E, Ventura A, Colman-Lerner A, Chernomoretz A | title = Impact of upstream and downstream constraints on a signaling module's ultrasensitivity | journal = Physical Biology | volume = 11 | issue = 6 | pages = 066003 | date = October 2014 | pmid = 25313165 | pmc = 4233326 | doi = 10.1088/1478-3975/11/6/066003 | bibcode = 2014PhBio..11f6003A }}</ref><ref name="altszylerUltrasens2017">{{cite journal | vauthors = Altszyler E, Ventura AC, Colman-Lerner A, Chernomoretz A | title = Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations | journal = PLOS ONE | volume = 12 | issue = 6 | pages = e0180083 | year = 2017 | pmid = 28662096 | pmc = 5491127 | doi = 10.1371/journal.pone.0180083 | bibcode = 2017PLoSO..1280083A | arxiv = 1608.08007 }}</ref><br />
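The point about context-dependent sensitivity can be made quantitative with a Hill-type ultrasensitive module: its log-log gain measured in isolation differs from the gain it contributes once a saturating upstream stage feeds it. A numerical sketch with purely illustrative parameters:

```python
# An ultrasensitive Hill module measured alone vs. embedded behind a
# saturating upstream component. Parameter values are illustrative only.

def hill(x, K=1.0, n=4):
    """Hill-type response; log-log slope at x = K is n/2."""
    return x**n / (K**n + x**n)

def upstream(x):
    """Saturating upstream stage feeding the module."""
    return 2.0 * x / (1.0 + x)

def log_gain(f, x, h=1e-6):
    """Local logarithmic sensitivity d(log f)/d(log x), by central difference."""
    return x * (f(x + h) - f(x - h)) / (2 * h) / f(x)

x = 1.0
isolated = log_gain(hill, x)                       # about 2.0
embedded = log_gain(lambda v: hill(upstream(v)), x)  # about 1.0
print(isolated, embedded)
```

Because the upstream stage compresses the input (its own log gain is below 1 near saturation), the embedded module contributes less sensitivity than it sustains in isolation, which is the effect the cited studies quantify.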
<br />
<br />
<br />
=== Modeling ===<br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in [[Transcription (biology)|transcription]], [[Translation (biology)|translation]], regulation and induction of gene regulatory networks.<ref>{{cite journal | vauthors = Carbonell-Ballestero M, Duran-Nebreda S, Montañez R, Solé R, Macía J, Rodríguez-Caso C | title = A bottom-up characterization of transfer functions for synthetic biology designs: lessons from enzymology | journal = Nucleic Acids Research | volume = 42 | issue = 22 | pages = 14060–14069 | date = December 2014 | pmid = 25404136 | pmc = 4267673 | doi = 10.1093/nar/gku964 }}</ref><ref>{{cite journal | vauthors = Kaznessis YN | title = Models for synthetic biology | journal = BMC Systems Biology | volume = 1 | issue = 1 | pages = 47 | date = November 2007 | pmid = 17986347 | pmc = 2194732 | doi = 10.1186/1752-0509-1-47 }}</ref><ref>{{cite conference |vauthors=Tuza ZA, Singhal V, Kim J, Murray RM | title = An in silico modeling toolbox for rapid prototyping of circuits in a biomolecular "breadboard" system. |book-title=52nd IEEE Conference on Decision and Control |date=December 2013 |doi=10.1109/CDC.2013.6760079}}</ref><br />
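The simplest model of the transcription/translation process described above is a pair of coupled ODEs for mRNA and protein. A forward-Euler sketch with illustrative rate constants (not drawn from any cited study):

```python
# Minimal gene-expression model: dm/dt = a - b*m, dp/dt = c*m - d*p,
# integrated with forward Euler. a: transcription rate, b: mRNA decay,
# c: translation rate, d: protein decay. All rates are illustrative.

def simulate(a=2.0, b=1.0, c=5.0, d=0.5, dt=0.01, t_end=40.0):
    m = p = 0.0
    for _ in range(int(t_end / dt)):
        m += dt * (a - b * m)
        p += dt * (c * m - d * p)
    return m, p

m, p = simulate()
# Steady state predicted analytically: m* = a/b = 2, p* = c*m*/d = 20
print(round(m, 2), round(p, 2))  # 2.0 20.0
```

Even this two-equation model exposes the design levers (promoter strength `a`, degradation tags `b` and `d`) that fuller multiscale simulations tune before fabrication.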
<br />
<br />
<br />
=== Synthetic transcription factors ===<br />
<br />
Studies have considered the components of the [[Transcription (biology)|DNA transcription]] mechanism. One desire of scientists creating [[synthetic biological circuit]]s is to be able to control the transcription of synthetic DNA in unicellular organisms ([[prokaryote]]s) and in multicellular organisms ([[eukaryote]]s). One study tested the adjustability of synthetic [[transcription factor]]s (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes.<ref name="Khalil AS 2012">{{cite journal | vauthors = Khalil AS, Lu TK, Bashor CJ, Ramirez CL, Pyenson NC, Joung JK, Collins JJ | title = A synthetic biology framework for programming eukaryotic transcription functions | journal = Cell | volume = 150 | issue = 3 | pages = 647–58 | date = August 2012 | pmid = 22863014 | pmc = 3653585 | doi = 10.1016/j.cell.2012.05.045 }}</ref> Researchers were able to mutate functional regions called [[zinc finger]]s, the DNA specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the [[eukaryotic translation]] mechanisms.<ref name="Khalil AS 2012"/><br />
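The effect of mutating a zinc finger can be pictured with simple equilibrium occupancy of the operator site: transcriptional output tracks [TF]/(K<sub>d</sub> + [TF]), so a mutation that raises K<sub>d</sub> (weaker binding) lowers output at the same sTF concentration. All numbers below are illustrative, not taken from the cited study:

```python
# Operator occupancy as a proxy for transcriptional output of a synthetic
# transcription factor (sTF). Raising Kd models a zinc-finger mutation
# that decreases DNA-binding affinity. Concentrations are hypothetical.

def occupancy(tf, kd):
    """Fraction of time the operator is bound at equilibrium."""
    return tf / (kd + tf)

tf = 10.0                                # nM, hypothetical sTF level
wild_type = occupancy(tf, kd=1.0)        # high-affinity zinc finger
mutant = occupancy(tf, kd=50.0)          # mutated finger, reduced affinity
print(round(wild_type, 2), round(mutant, 2))  # 0.91 0.17
```

This is the tunability the study exploited: a graded series of K<sub>d</sub> values gives a graded series of transcription outputs from the same operator.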
<br />
<br />
<br />
<br />
== Applications 应用 ==<br />
<br />
=== Biological computers 生物计算机 ===<br />
<br />
<br />
A [[biological computer]] refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of [[logic gate]]s in a number of organisms,<ref>{{cite journal | vauthors = Singh V | title = Recent advances and opportunities in synthetic logic gates engineering in living cells | journal = Systems and Synthetic Biology | volume = 8 | issue = 4 | pages = 271–82 | date = December 2014 | pmid = 26396651 | pmc = 4571725 | doi = 10.1007/s11693-014-9154-6 }}</ref> and demonstrated that bacteria can be engineered to perform both analog and digital computation in living cells.<ref>{{cite journal | vauthors = Purcell O, Lu TK | title = Synthetic analog and digital circuits for cellular computation and memory | journal = Current Opinion in Biotechnology | volume = 29 | pages = 146–55 | date = October 2014 | pmid = 24794536 | pmc = 4237220 | doi = 10.1016/j.copbio.2014.04.009 | series = Cell and Pathway Engineering }}</ref><ref>{{cite journal | vauthors = Daniel R, Rubens JR, Sarpeshkar R, Lu TK | title = Synthetic analog computation in living cells | journal = Nature | volume = 497 | issue = 7451 | pages = 619–23 | date = May 2013 | pmid = 23676681 | doi = 10.1038/nature12148 | bibcode = 2013Natur.497..619D | s2cid = 4358570 }}</ref> In 2007, researchers demonstrated a universal logic evaluator that operates in mammalian cells.<ref>{{cite journal | vauthors = Rinaudo K, Bleris L, Maddamsetti R, Subramanian S, Weiss R, Benenson Y | title = A universal RNAi-based logic evaluator that operates in mammalian cells | journal = Nature Biotechnology | volume = 25 | issue = 7 | pages = 795–801 | date = July 2007 | pmid = 17515909 | doi = 10.1038/nbt1307 | s2cid = 280451 }}</ref> Subsequently, in 2011, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells.<ref>{{cite journal | vauthors = Xie Z, Wroblewska L, Prochazka L, Weiss R, Benenson Y | title = Multi-input RNAi-based logic circuit for identification of specific cancer cells | journal = Science | volume = 333 | issue = 6047 | pages = 1307–11 | date = September 2011 | pmid = 21885784 | doi = 10.1126/science.1205527 | bibcode = 2011Sci...333.1307X | s2cid = 13743291 | url = https://semanticscholar.org/paper/372e175668b5323d79950b58f12b36f6974a81ef }}</ref> Another group of researchers demonstrated in 2016 that principles of [[computer engineering]] can be used to automate digital circuit design in bacterial cells.<ref>{{cite journal | vauthors = Nielsen AA, Der BS, Shin J, Vaidyanathan P, Paralanov V, Strychalski EA, Ross D, Densmore D, Voigt CA | title = Genetic circuit design automation | journal = Science | volume = 352 | issue = 6281 | pages = aac7341 | date = April 2016 | pmid = 27034378 | doi = 10.1126/science.aac7341 | doi-access = free }}</ref> In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.<ref>{{cite journal | vauthors = Weinberg BH, Pham NT, Caraballo LD, Lozanoski T, Engel A, Bhatia S, Wong WW | title = Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells | journal = Nature Biotechnology | volume = 35 | issue = 5 | pages = 453–462 | date = May 2017 | pmid = 28346402 | pmc = 5423837 | doi = 10.1038/nbt.3805 }}</ref><br />
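A transcriptional AND gate of the kind described above can be sketched as a toy model: the output is expressed only when both inducers are present, and an analog expression level is read out as a digital 0/1. The Hill parameters and threshold below are hypothetical, chosen only to make the digital abstraction visible; real gates are characterized experimentally.

```python
# Illustrative model of a two-input genetic AND gate: reporter protein is
# produced only when both inducer A and inducer B are present.
# All parameter values are hypothetical, for illustration only.

def hill_activation(inducer, k=1.0, n=2.0):
    """Fraction of maximal promoter activity at a given inducer level."""
    return inducer**n / (k**n + inducer**n)

def and_gate_output(a, b, max_expression=100.0):
    """Steady-state reporter expression for a transcriptional AND gate."""
    return max_expression * hill_activation(a) * hill_activation(b)

def digitize(expression, threshold=25.0):
    """Read the analog expression level as a digital 0/1."""
    return 1 if expression >= threshold else 0

if __name__ == "__main__":
    for a, b in [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]:
        print(f"A={a:>4} B={b:>4} -> {digitize(and_gate_output(a, b))}")
```

Printing the four input combinations reproduces the AND truth table (0, 0, 0, 1), which is the sense in which an analog gene circuit implements a digital logic gate.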
<br />
<br />
<br />
=== Biosensors 生物传感器 ===<br />
<br />
<br />
A [[biosensor]] refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the [[Luciferase|Lux operon]] of ''[[Aliivibrio fischeri]],''<ref>{{cite journal | vauthors = de Almeida PE, van Rappard JR, Wu JC | title = In vivo bioluminescence for tracking cell fate and function | journal = American Journal of Physiology. Heart and Circulatory Physiology | volume = 301 | issue = 3 | pages = H663–71 | date = September 2011 | pmid = 21666118 | pmc = 3191083 | doi = 10.1152/ajpheart.00337.2011 }}</ref> which codes for the enzyme that is the source of bacterial [[bioluminescence]], and can be placed after a respondent [[Promoter (genetics)|promoter]] to express the luminescence genes in response to a specific environmental stimulus.<ref>{{cite journal | vauthors = Close DM, Xu T, Sayler GS, Ripp S | title = In vivo bioluminescent imaging (BLI): noninvasive visualization and interrogation of biological processes in living animals | journal = Sensors | volume = 11 | issue = 1 | pages = 180–206 | date = 2011 | pmid = 22346573 | pmc = 3274065 | doi = 10.3390/s110100180 }}</ref> One such sensor consists of a [[bioluminescent bacteria]]l coating on a photosensitive [[computer chip]] to detect certain [[petroleum]] [[pollutant]]s. When the bacteria sense the pollutant, they luminesce.<ref>{{cite journal|last=Gibbs|first=W. Wayt| name-list-style = vanc |date=1997 |title=Critters on a Chip |url=http://www.sciam.com/article.cfm?id=critters-on-a-chip |journal=Scientific American|access-date=2 Mar 2009}}</ref> Another example of a similar mechanism is the detection of landmines by an engineered ''E. coli'' reporter strain capable of detecting [[TNT]] and its main degradation product [[2,4-Dinitrotoluene|DNT]], and consequently producing a green fluorescent protein ([[Green fluorescent protein|GFP]]).<ref>{{Cite journal|last1=Belkin|first1=Shimshon|last2=Yagur-Kroll|first2=Sharon|last3=Kabessa|first3=Yossef|last4=Korouma|first4=Victor|last5=Septon|first5=Tali|last6=Anati|first6=Yonatan|last7=Zohar-Perez|first7=Cheinat|last8=Rabinovitz|first8=Zahi|last9=Nussinovitch|first9=Amos|date=April 2017|title=Remote detection of buried landmines using a bacterial sensor|journal=Nature Biotechnology|volume=35|issue=4|pages=308–310|doi=10.1038/nbt.3791|pmid=28398330|s2cid=3645230|issn=1087-0156}}</ref><br />
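The dose-response behaviour of such a lux-based biosensor can be sketched with a Hill activation model: luminescence rises with analyte concentration, and a sample is called positive once the signal clears a threshold. All constants below (basal leak, half-maximal analyte level, detection threshold) are assumed for illustration, not measured values.

```python
# Toy dose-response model for a pollutant-responsive promoter driving the
# lux genes. All parameter values are hypothetical.

def luminescence(analyte_uM, basal=2.0, v_max=1000.0, k=5.0, n=1.5):
    """Relative light units (RLU) from a lux reporter at a given analyte level."""
    induced = v_max * analyte_uM**n / (k**n + analyte_uM**n)
    return basal + induced

def detected(analyte_uM, threshold_rlu=50.0):
    """Call the sample positive if the signal clears the threshold."""
    return luminescence(analyte_uM) >= threshold_rlu

if __name__ == "__main__":
    for c in [0.0, 0.5, 2.0, 10.0, 50.0]:
        print(f"{c:>5} uM -> {luminescence(c):8.1f} RLU, detected={detected(c)}")
```

The basal term models leaky expression from the uninduced promoter; in a real assay the threshold would be set from the measured background of the reporter strain.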
<br />
<br />
<br />
<br />
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.<ref name="pmid26019220">{{cite journal | vauthors = Danino T, Prindle A, Kwong GA, Skalak M, Li H, Allen K, Hasty J, Bhatia SN | title = Programmable probiotics for detection of cancer in urine | journal = Science Translational Medicine | volume = 7 | issue = 289 | pages = 289ra84 | date = May 2015 | pmid = 26019220 | pmc = 4511399 | doi = 10.1126/scitranslmed.aaa3519 }}</ref><br />
<br />
<br />
<br />
<br />
=== Cell transformation 细胞转化 ===<br />
<br />
{{Main|Transformation (genetics)}}Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and protein. Synthetic biologists have designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.<br />
<br />
<br />
<br />
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering ''E. coli'' and [[yeast]] for commercial production of a precursor of the [[Antimalarial medication|antimalarial drug]], [[Artemisinin]].<ref>{{cite journal | vauthors = Westfall PJ, Pitera DJ, Lenihan JR, Eng D, Woolard FX, Regentin R, Horning T, Tsuruta H, Melis DJ, Owens A, Fickes S, Diola D, Benjamin KR, Keasling JD, Leavell MD, McPhee DJ, Renninger NS, Newman JD, Paddon CJ | title = Production of amorphadiene in yeast, and its conversion to dihydroartemisinic acid, precursor to the antimalarial agent artemisinin | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 3 | pages = E111–8 | date = January 2012 | pmid = 22247290 | pmc = 3271868 | doi = 10.1073/pnas.1110740109 | bibcode = 2012PNAS..109E.111W }}</ref><br />
<br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be [[Transformation (genetics)|transformed]] with new DNA. Several ways allow constructing synthetic DNA components and even entire [[Artificial gene synthesis|synthetic genomes]], but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or [[phenotype]]s while growing and thriving.<ref>{{cite news|url=https://www.independent.co.uk/news/science/eureka-scientists-unveil-giant-leap-towards-synthetic-life-9219644.html|title=Eureka! Scientists unveil giant leap towards synthetic life|last=Connor|first=Steve|date=28 March 2014|work=The Independent|access-date=2015-08-06}}</ref> Cell transformation is used to create [[Synthetic biological circuit|biological circuits]], which can be manipulated to yield desired outputs.<ref name=":0" /><ref name=":1" /><br />
<br />
<br />
<br />
<br />
By integrating synthetic biology with [[materials science]], it would be possible to use cells as microscopic molecular foundries to produce materials whose properties are genetically encoded. Re-engineering has produced Curli fibers, the [[amyloid]] component of extracellular material of [[biofilms]], as a platform for programmable [[nanomaterial]]. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.<ref>{{cite journal|vauthors=Nguyen PQ, Botyanszki Z, Tay PK, Joshi NS|date=September 2014|title=Programmable biofilm-based materials from engineered curli nanofibres|journal=Nature Communications|volume=5|pages=4945|bibcode=2014NatCo...5.4945N|doi=10.1038/ncomms5945|pmid=25229329|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
=== Designed proteins 设计蛋白质 ===<br />
<br />
<br />
<br />
<br />
[[File:Top7.png|thumb|The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature<ref name="kuhlman03">{{cite journal | vauthors = Kuhlman B, Dantas G, Ireton GC, Varani G, Stoddard BL, Baker D | title = Design of a novel globular protein fold with atomic-level accuracy | journal = Science | volume = 302 | issue = 5649 | pages = 1364–8 | date = November 2003 | pmid = 14631033 | doi = 10.1126/science.1089427 | bibcode = 2003Sci...302.1364K | s2cid = 1939390 | url = https://semanticscholar.org/paper/3188f905b60172dcad17a9b8c23567400c2bb65f }}</ref> ]]<br />
<br />
<br />
<br />
<br />
Natural proteins can be engineered, for example by [[directed evolution]]; novel protein structures that match or improve on the functionality of existing proteins can thus be produced. One group generated a [[helix bundle]] that was capable of binding [[oxygen]] with similar properties as [[hemoglobin]], yet did not bind [[carbon monoxide]].<ref>{{cite journal | vauthors = Koder RL, Anderson JL, Solomon LA, Reddy KS, Moser CC, Dutton PL | title = Design and engineering of an O(2) transport protein | journal = Nature | volume = 458 | issue = 7236 | pages = 305–9 | date = March 2009 | pmid = 19295603 | pmc = 3539743 | doi = 10.1038/nature07841 | bibcode = 2009Natur.458..305K }}</ref> A similar protein structure was generated to support a variety of [[oxidoreductase]] activities<ref>{{cite journal | vauthors = Farid TA, Kodali G, Solomon LA, Lichtenstein BR, Sheehan MM, Fry BA, Bialas C, Ennist NM, Siedlecki JA, Zhao Z, Stetz MA, Valentine KG, Anderson JL, Wand AJ, Discher BM, Moser CC, Dutton PL | title = Elementary tetrahelical protein design for diverse oxidoreductase functions | journal = Nature Chemical Biology | volume = 9 | issue = 12 | pages = 826–833 | date = December 2013 | pmid = 24121554 | pmc = 4034760 | doi = 10.1038/nchembio.1362 }}</ref> while another formed a structurally and sequentially novel [[ATPase]].<ref name="WangHecht2020">{{cite journal|last1=Wang|first1=MS|last2=Hecht|first2=MH|title=A Completely De Novo ATPase from Combinatorial Protein Design|journal=Journal of the American Chemical Society|year=2020|volume=142|issue=36|pages=15230–15234|issn=0002-7863|doi=10.1021/jacs.0c02954|pmid=32833456}}</ref> Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule [[clozapine N-oxide]] but insensitive to the native [[ligand]], [[acetylcholine]]; these receptors are known as [[Receptor activated solely by a synthetic ligand|DREADDs]].<ref>{{cite journal | vauthors = Armbruster BN, Li X, Pausch
MH, Herlitze S, Roth BL | title = Evolving the lock to fit the key to create a family of G protein-coupled receptors potently activated by an inert ligand | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 104 | issue = 12 | pages = 5163–8 | date = March 2007 | pmid = 17360345 | pmc = 1829280 | doi = 10.1073/pnas.0700293104 | bibcode = 2007PNAS..104.5163A }}</ref> Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100 fold specificity for production of longer chain alcohols from sugar.<ref>{{cite journal | vauthors = Mak WS, Tran S, Marcheschi R, Bertolani S, Thompson J, Baker D, Liao JC, Siegel JB | title = Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway | journal = Nature Communications | volume = 6 | pages = 10005 | date = November 2015 | pmid = 26598135 | pmc = 4673503 | doi = 10.1038/ncomms10005 | bibcode = 2015NatCo...610005M }}</ref><br />
<br />
<br />
<br />
<br />
Another common investigation is [[Expanded genetic code|expansion]] of the natural set of 20 [[amino acid]]s. Excluding [[stop codon]]s, 61 [[codons]] have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl [[tyrosine]]; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded [[nonsense suppressor]] [[Transfer RNA|tRNA]]-[[Aminoacyl tRNA synthetase]] pairs from other organisms, though in most cases substantial engineering is required.<ref>{{cite journal | vauthors = Wang Q, Parrish AR, Wang L | title = Expanding the genetic code for biological studies | journal = Chemistry & Biology | volume = 16 | issue = 3 | pages = 323–36 | date = March 2009 | pmid = 19318213 | pmc = 2696486 | doi = 10.1016/j.chembiol.2009.03.001 }}</ref><br />
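The codon arithmetic above can be checked directly: building the standard genetic code over all 64 triplets confirms that 3 stop codons leave 61 sense codons, which encode the 20 standard amino acids.

```python
# Enumerate the 64 codons of the standard genetic code and count sense
# codons, stop codons, and distinct amino acids.
from itertools import product

BASES = "TCAG"
# One-letter amino acids for codons in nested TCAG order; '*' marks stops.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

CODON_TABLE = {
    "".join(codon): aa for codon, aa in zip(product(BASES, repeat=3), AA)
}

sense = [c for c, aa in CODON_TABLE.items() if aa != "*"]
stops = [c for c, aa in CODON_TABLE.items() if aa == "*"]
amino_acids = {aa for aa in CODON_TABLE.values() if aa != "*"}

if __name__ == "__main__":
    # 64 codons total: 61 sense codons and 3 stops encode 20 amino acids.
    print(len(CODON_TABLE), len(sense), len(stops), len(amino_acids))
```

Expanded-genetic-code projects work by reassigning entries in this table, typically recoding a stop codon (such as the amber codon TAG) to accept a nonstandard amino acid via an orthogonal tRNA/synthetase pair.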
<br />
<br />
<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid.<ref>{{cite journal|author=Davidson, AR|author2=Lumb, KJ|author3=Sauer, RT|date=1995|title=Cooperatively folded proteins in random sequence libraries|journal=Nature Structural Biology|volume=2|issue=10|pages=856–864|doi=10.1038/nsb1095-856|pmid=7552709|s2cid=31781262}}</ref> For instance, several [[Chemical polarity|non-polar]] amino acids within a protein can all be replaced with a single non-polar amino acid.<ref>{{cite journal|vauthors=Kamtekar S, Schiffer JM, Xiong H, Babik JM, Hecht MH|date=December 1993|title=Protein design by binary patterning of polar and nonpolar amino acids|journal=Science|volume=262|issue=5140|pages=1680–5|bibcode=1993Sci...262.1680K|doi=10.1126/science.8259512|pmid=8259512}}</ref> One project demonstrated that an engineered version of [[Chorismate mutase]] still had catalytic activity when only 9 amino acids were used.<ref>{{cite journal|vauthors=Walter KU, Vamvaca K, Hilvert D|date=November 2005|title=An active enzyme constructed from a 9-amino acid alphabet|journal=The Journal of Biological Chemistry|volume=280|issue=45|pages=37742–6|doi=10.1074/jbc.M507210200|pmid=16144843|doi-access=free}}</ref><br />
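The alphabet-reduction idea can be illustrated by collapsing chemically similar residues onto a single representative, rewriting a sequence in a smaller alphabet. The four groups below are a common coarse classification chosen for illustration; they are not the exact sets used in the cited studies, and the example peptide is hypothetical.

```python
# Collapse the 20 standard amino acids onto 4 representatives
# (non-polar -> L, polar uncharged -> S, basic -> K, acidic -> D).
# The grouping is an illustrative coarse classification.

REDUCTION = {
    **{aa: "L" for aa in "AVLIMFWP"},   # non-polar / hydrophobic
    **{aa: "S" for aa in "GSTCYNQ"},    # polar, uncharged
    **{aa: "K" for aa in "KRH"},        # positively charged
    **{aa: "D" for aa in "DE"},         # negatively charged
}

def reduce_alphabet(sequence):
    """Rewrite a protein sequence using the 4-letter reduced alphabet."""
    return "".join(REDUCTION[aa] for aa in sequence.upper())

if __name__ == "__main__":
    peptide = "MKTAYIAKQR"  # hypothetical peptide
    print(reduce_alphabet(peptide))
```

A reduced-alphabet library samples sequence space far more coarsely than the full 20-letter code, which is what makes results like the 9-amino-acid chorismate mutase informative about how much chemical diversity folding and catalysis actually require.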
<br />
<br />
<br />
Researchers and companies practice synthetic biology to synthesize [[industrial enzymes]] with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective.<ref>{{cite web|url=https://www.thermofisher.com/us/en/home/life-science/synthetic-biology/synthetic-biology-applications.html|title=Synthetic Biology Applications|website=www.thermofisher.com|access-date=2015-11-12}}</ref> The improvement of metabolic engineering through synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentive chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<ref>{{cite journal | vauthors = Liu Y, Shin HD, Li J, Liu L | title = Toward metabolic engineering in the context of system biology and synthetic biology: advances and prospects | journal = Applied Microbiology and Biotechnology | volume = 99 | issue = 3 | pages = 1109–18 | date = February 2015 | pmid = 25547833 | doi = 10.1007/s00253-014-6298-y | s2cid = 954858 }}</ref><br />
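The link between engineered enzyme activity and metabolic reaction rates can be sketched with Michaelis-Menten kinetics: raising the turnover number (kcat) proportionally raises the flux an enzyme supports at a fixed substrate and enzyme level. All parameter values below are hypothetical, for illustration only.

```python
# Michaelis-Menten sketch: effect of an engineered kcat increase on
# reaction rate. All numbers are hypothetical.

def mm_rate(substrate_mM, kcat_per_s, enzyme_uM, km_mM):
    """Reaction rate (uM/s) under the Michaelis-Menten model:
    v = kcat * [E] * [S] / (Km + [S])."""
    return kcat_per_s * enzyme_uM * substrate_mM / (km_mM + substrate_mM)

if __name__ == "__main__":
    wild_type = mm_rate(substrate_mM=2.0, kcat_per_s=10.0, enzyme_uM=1.0, km_mM=0.5)
    engineered = mm_rate(substrate_mM=2.0, kcat_per_s=50.0, enzyme_uM=1.0, km_mM=0.5)
    print(f"wild type: {wild_type:.1f} uM/s, engineered: {engineered:.1f} uM/s")
```

In a real pathway the gain in end-product yield also depends on which step is flux-limiting, which is why pathway-level (modular) optimization matters alongside single-enzyme engineering.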
<br />
<br />
<br />
<br />
=== Designed nucleic acid systems 设计核酸系统 ===<br />
<br />
Scientists can encode digital information onto a single strand of [[synthetic DNA]]. In 2012, [[George M. Church]] encoded one of his books about synthetic biology in DNA. The 5.3 [[Megabit|Mb]] of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA.<ref>{{cite journal | vauthors = Church GM, Gao Y, Kosuri S | title = Next-generation digital information storage in DNA | journal = Science | volume = 337 | issue = 6102 | pages = 1628 | date = September 2012 | pmid = 22903519 | doi = 10.1126/science.1226355 | bibcode = 2012Sci...337.1628C | s2cid = 934617 | url = https://semanticscholar.org/paper/0856a685e85bcd27c11cd5f385be818deceb27bd }}</ref> A similar project encoded the complete [[sonnet]]s of [[William Shakespeare]] in DNA.<ref>{{cite web|url=http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|title=Huge amounts of data can be stored in DNA|date=23 January 2013|publisher=Sky News|access-date=24 January 2013|archive-url=https://web.archive.org/web/20160531044937/http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|archive-date=2016-05-31 }}</ref> More generally, algorithms such as NUPACK,<ref>{{Cite journal|last1=Zadeh|first1=Joseph N.|last2=Steenberg|first2=Conrad D.|last3=Bois|first3=Justin S.|last4=Wolfe|first4=Brian R.|last5=Pierce|first5=Marshall B.|last6=Khan|first6=Asif R.|last7=Dirks|first7=Robert M.|last8=Pierce|first8=Niles A.|date=2011-01-15|title=NUPACK: Analysis and design of nucleic acid systems|journal=Journal of Computational Chemistry|language=en|volume=32|issue=1|pages=170–173|doi=10.1002/jcc.21596|pmid=20645303}}</ref> ViennaRNA,<ref>{{Cite journal|last1=Lorenz|first1=Ronny|last2=Bernhart|first2=Stephan H.|last3=Höner zu Siederdissen|first3=Christian|last4=Tafer|first4=Hakim|last5=Flamm|first5=Christoph|last6=Stadler|first6=Peter F.|last7=Hofacker|first7=Ivo L.|date=2011-11-24|title=ViennaRNA Package 2.0|journal=Algorithms for 
Molecular Biology|language=en|volume=6|issue=1|pages=26|doi=10.1186/1748-7188-6-26|issn=1748-7188|pmc=3319429|pmid=22115189}}</ref> Ribosome Binding Site Calculator,<ref>{{Cite journal|last1=Salis|first1=Howard M.|last2=Mirsky|first2=Ethan A.|last3=Voigt|first3=Christopher A.|date=October 2009|title=Automated design of synthetic ribosome binding sites to control protein expression|journal=Nature Biotechnology|language=en|volume=27|issue=10|pages=946–950|doi=10.1038/nbt.1568|pmid=19801975|issn=1546-1696|pmc=2782888}}</ref> Cello,<ref>{{Cite journal|last1=Nielsen|first1=A. A. K.|last2=Der|first2=B. S.|last3=Shin|first3=J.|last4=Vaidyanathan|first4=P.|last5=Paralanov|first5=V.|last6=Strychalski|first6=E. A.|last7=Ross|first7=D.|last8=Densmore|first8=D.|last9=Voigt|first9=C. A.|date=2016-04-01|title=Genetic circuit design automation|journal=Science|language=en|volume=352|issue=6281|pages=aac7341|doi=10.1126/science.aac7341|pmid=27034378|issn=0036-8075|doi-access=free}}</ref> and Non-Repetitive Parts Calculator<ref>{{Cite journal|last1=Hossain|first1=Ayaan|last2=Lopez|first2=Eriberto|last3=Halper|first3=Sean M.|last4=Cetnar|first4=Daniel P.|last5=Reis|first5=Alexander C.|last6=Strickland|first6=Devin|last7=Klavins|first7=Eric|last8=Salis|first8=Howard M.|date=2020-07-13|title=Automated design of thousands of nonrepetitive parts for engineering stable genetic systems|url=https://www.nature.com/articles/s41587-020-0584-2|journal=Nature Biotechnology|language=en|pages=1–10|doi=10.1038/s41587-020-0584-2|pmid=32661437|s2cid=220506228|issn=1546-1696}}</ref> enable the design of new genetic systems.<br />
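A minimal sketch of the DNA data storage idea: map each 2-bit pair of a byte to one nucleotide and decode back. This toy 2-bits-per-base scheme is for illustration only; Church's actual encoding differed, and practical schemes add redundancy and avoid long homopolymer runs.

```python
# Toy DNA data storage: 2 bits per base, lossless round trip.

ENCODE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a DNA string, most significant bit pair first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(ENCODE[(byte >> shift) & 0b11])
    return "".join(bases)

def from_dna(dna: str) -> bytes:
    """Decode a DNA string produced by to_dna back into bytes."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | DECODE[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    message = b"Shall I compare thee to a summer's day?"
    strand = to_dna(message)
    print(len(strand), "bases")
    assert from_dna(strand) == message
```

At 2 bits per base, each byte costs 4 nucleotides, so 5.3 Mb of data corresponds to a few million synthesized bases, which is why such projects fragment the data across many short oligonucleotides with address tags.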
<br />
<br />
<br />
[Figure caption: Gene functions in the minimal genome of the synthetic organism, Syn 3.]<br />
<br />
Many technologies have been developed for incorporating [[Nucleic acid analogue|unnatural nucleotides]] and amino acids into nucleic acids and proteins, both ''in vitro'' and ''in vivo''. For example, in May 2014, researchers announced that they had successfully introduced two new artificial [[nucleotides]] into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate [[Messenger RNA|mRNA]] or proteins able to use the artificial nucleotides.<ref name="NYT-20140507">{{cite news|url=https://www.nytimes.com/2014/05/08/business/researchers-report-breakthrough-in-creating-artificial-genetic-code.html|title=Researchers Report Breakthrough in Creating Artificial Genetic Code|last=Pollack|first=Andrew|date=May 7, 2014|work=[[New York Times]]|access-date=May 7, 2014}}</ref><ref name="NATURE-20140507">{{cite journal|last=Callaway|first=Ewen|date=May 7, 2014|title=First life with 'alien' DNA|url=http://www.nature.com/news/first-life-with-alien-dna-1.15179|journal=[[Nature (journal)|Nature]]|doi=10.1038/nature.2014.15179|s2cid=86967999|access-date=May 7, 2014}}</ref><ref name="NATJ-20140507">{{cite journal|vauthors=Malyshev DA, Dhami K, Lavergne T, Chen T, Dai N, Foster JM, Corrêa IR, Romesberg FE|date=May 2014|title=A semi-synthetic organism with an expanded genetic alphabet|journal=Nature|volume=509|issue=7500|pages=385–8|bibcode=2014Natur.509..385M|doi=10.1038/nature13314|pmc=4058825|pmid=24805238}}</ref><br />
<br />
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created ''in vitro'' from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to probe the origins of life, to study some of the properties of life, or, more ambitiously, to recreate life from non-living (abiotic) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools. So far, nobody has been able to create such a cell entirely from non-living components, although living host cells that received a chemically synthesized genome were able to grow and replicate. ''Mycoplasma laboratorium'' is the only living organism with a completely engineered genome.<br />
<br />
合成生物学的一个重要课题是合成生命,它涉及到在体外由生物分子和/或其化学类似物创造的假想生物体。合成生命实验或者试图探索生命的起源,研究生命的某些特性,或者更雄心勃勃地从非生命(非生物)组成部分中重新创造生命。合成生命生物学试图创造能够执行重要功能的生命有机体,从制造药品到净化被污染的土地和水。在医学上,它提供了使用设计生物学部件作为新类型治疗和诊断工具的起点的前景。没有人能够制造出这样的细胞。宿主细胞能够生长和复制。实验室合成支原体是唯一一个拥有完全工程化基因组的生物体。<br />
<br />
<br />
<br />
=== Space exploration 太空探索 ===<br />
<br />
<br />
2014年,第一个具有“人工”扩展 DNA 编码的活体生物问世;研究小组使用的大肠杆菌,其基因组被提取出来,并被替换为带有扩展遗传密码的染色体。添加的核苷是 d5SICS 和 dNaM。随后,一些国家相继成立了国家级合成细胞组织,包括 FabriCell、MaxSynBio 和 BaSyC。欧洲的合成细胞研究工作于2019年统一为 SynCellEU 倡议。<br />
<br />
Synthetic biology raised [[NASA|NASA's]] interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.<ref name="Verseux, C. 2015 73–100">{{Cite book|author=Verseux, C.|author2=Paulino-Lima, I.|author3=Baque, M.|author4=Billi, D.|author5=Rothschild, L.|date=2016|title=Synthetic Biology for Space Exploration: Promises and Societal Implications|journal=Ambivalences of Creating Life. Societal and Philosophical Dimensions of Synthetic Biology, Publisher: Springer-Verlag|volume=45|pages=73–100|doi=10.1007/978-3-319-21088-9_4|series=Ethics of Science and Technology Assessment|isbn=978-3-319-21087-2}}</ref><ref>{{cite journal|last1=Menezes|first1=A|last2=Cumbers|first2=J|last3=Hogan|first3=J|last4=Arkin|first4=A|date=2014|title=Towards synthetic biological approaches to resource utilization on space missions|journal=Journal of the Royal Society, Interface|volume=12|issue=102|pages=20140715|doi=10.1098/rsif.2014.0715|pmid=25376875|pmc=4277073}}</ref><ref>{{cite journal | vauthors = Montague M, McArthur GH, Cockell CS, Held J, Marshall W, Sherman LA, Wang N, Nicholson WL, Tarjan DR, Cumbers J | title = The role of synthetic biology for in situ resource utilization (ISRU) | journal = Astrobiology | volume = 12 | issue = 12 | pages = 1135–42 | date = December 2012 | pmid = 23140229 | doi = 10.1089/ast.2012.0829 | bibcode = 2012AsBio..12.1135M }}</ref> On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<ref name="Verseux, C. 2015 73–100" /> Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.<ref>{{Cite web|title=NASA - Designer Plants on Mars|url=https://www.nasa.gov/centers/goddard/news/topstory/2005/mars_plants.html|last=GSFC|first=Bill Steigerwald |website=www.nasa.gov|language=en|access-date=2020-05-29}}</ref><br />
<br />
<br />
<br />
=== Synthetic life 合成生命 ===<br />
<br />
{{Further|Artificially Expanded Genetic Information System|Hypothetical types of biochemistry}}<br />
<br />
<br />
长期以来,细菌一直被用于癌症治疗。双歧杆菌和梭状芽胞杆菌选择性地定殖于肿瘤并减小肿瘤体积。最近,合成生物学家对细菌进行了重新编程,使其能够感知特定的癌症状态并做出反应。大多数情况下,细菌被用来直接向肿瘤输送治疗分子,以最小化脱靶效应。为了靶向肿瘤细胞,细菌表面表达了可以特异性识别肿瘤的肽。所使用的肽包括一种特异性靶向人表皮生长因子受体2的亲和体分子(affibody)和一种合成黏附素。另一种方法是通过在细菌中构建“与”逻辑门,让细菌感知肿瘤微环境,例如缺氧。然后,细菌只通过溶菌或细菌分泌系统向肿瘤释放靶向治疗分子。溶菌的优点是可以刺激免疫系统并控制生长。此外还可以使用多种类型的分泌系统及其他策略。该系统可由外部信号诱导,诱导因子包括化学物质、电磁波或光波。<br />
<br />
[[File:Syn3 genome.svg|thumb|upright=1.25|[[Gene]] functions in the minimal [[genome]] of the synthetic organism, ''[[Syn 3]]''.<ref name="Hutchison">{{cite journal | vauthors = Hutchison CA, Chuang RY, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, Gill J, Kannan K, Karas BJ, Ma L, Pelletier JF, Qi ZQ, Richter RA, Strychalski EA, Sun L, Suzuki Y, Tsvetanova B, Wise KS, Smith HO, Glass JI, Merryman C, Gibson DG, Venter JC | title = Design and synthesis of a minimal bacterial genome | journal = Science | volume = 351 | issue = 6280 | pages = aad6253 | date = March 2016 | pmid = 27013737 | doi = 10.1126/science.aad6253 | bibcode = 2016Sci...351.....H | doi-access = free }}</ref>]]<br />
<br />
One important topic in synthetic biology is ''synthetic life'', which is concerned with hypothetical organisms created ''[[in vitro]]'' from [[biomolecule]]s and/or [[hypothetical types of biochemistry|chemical analogues thereof]]. Synthetic life experiments attempt to either probe the [[origins of life]], study some of the properties of life, or more ambitiously to recreate life from non-living ([[abiotic components|abiotic]]) components. Synthetic biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water.<ref name="enzymes2014">{{cite news |last=Connor |first=Steve |url=https://www.independent.co.uk/news/science/major-synthetic-life-breakthrough-as-scientists-make-the-first-artificial-enzymes-9896333.html |title=Major synthetic life breakthrough as scientists make the first artificial enzymes |work=The Independent |location=London |date=1 December 2014 |access-date=2015-08-06 }}</ref> In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.<ref name="enzymes2014" /><br />
<br />
<br />
在这些治疗方法中应用了多种菌种和菌株。最常用的细菌是鼠伤寒沙门氏菌、大肠杆菌、双歧杆菌、链球菌、乳杆菌、李斯特菌和枯草杆菌。这些菌种各有特性,在组织定殖、与免疫系统的相互作用以及应用难易程度方面,它们对癌症治疗各有独到之处。<br />
<br />
<br />
<br />
A living "artificial cell" has been defined as a completely synthetic cell that can capture [[energy]], maintain [[electrochemical gradient|ion gradients]], contain [[macromolecules]] as well as store information and have the ability to [[mutate]].<ref name="Deamer">{{cite journal | vauthors = Deamer D | title = A giant step towards artificial life? | journal = Trends in Biotechnology | volume = 23 | issue = 7 | pages = 336–8 | date = July 2005 | pmid = 15935500 | doi = 10.1016/j.tibtech.2005.05.008 }}</ref> Nobody has been able to create such a cell.<ref name='Deamer'/><br />
<br />
<br />
<br />
<br />
免疫系统在癌症中起着重要作用。可以利用免疫系统攻击癌细胞。以细胞为基础的疗法主要是免疫疗法,主要是通过改造 T 细胞。<br />
<br />
A completely synthetic bacterial chromosome was produced in 2010 by [[Craig Venter]], and his team introduced it to genomically emptied bacterial host cells.<ref name="gibson52">{{cite journal | vauthors = Gibson DG, Glass JI, Lartigue C, Noskov VN, Chuang RY, Algire MA, Benders GA, Montague MG, Ma L, Moodie MM, Merryman C, Vashee S, Krishnakumar R, Assad-Garcia N, Andrews-Pfannkoch C, Denisova EA, Young L, Qi ZQ, Segall-Shapiro TH, Calvey CH, Parmar PP, Hutchison CA, Smith HO, Venter JC | title = Creation of a bacterial cell controlled by a chemically synthesized genome | journal = Science | volume = 329 | issue = 5987 | pages = 52–6 | date = July 2010 | pmid = 20488990 | doi = 10.1126/science.1190719 | bibcode = 2010Sci...329...52G | doi-access = free }}</ref> The host cells were able to grow and replicate.<ref>{{cite web| url=https://www.npr.org/templates/transcript/transcript.php?storyId=127010591| title=Scientists Reach Milestone On Way To Artificial Life| access-date=2010-06-09|date=2010-05-20}}</ref><ref>{{cite web|last1=Venter|first1=JC|title=From Designing Life to Prolonging Healthy Life|url=https://www.youtube.com/watch?v=Gwu_djYMm3w&t=30s|website=YouTube|publisher=University of California Television (UCTV)|access-date=1 February 2017}}</ref> The [[Mycoplasma laboratorium]] is the only living organism with a completely engineered genome.<br />
<br />
<br />
<br />
<br />
T 细胞受体被设计和“训练”用以检测癌症表位。嵌合抗原受体(CAR)是由融合于细胞内 T 细胞信号域的抗体片段组成,这些信号域可以激活并触发细胞增殖。美国食品药品监督管理局(FDA)批准了第二代基于嵌合抗原受体的基因治疗。<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used ''E. coli'' that had its genome extracted and replaced with a chromosome with an expanded genetic code. The [[nucleoside]]s added are [[d5SICS]] and [[dNaM]].<ref name="NATJ-20140507"/><br />
<br />
<br />
<br />
<br />
基因开关被设计出来以提高治疗的安全性。如果病人出现严重的副作用,杀伤开关就会终止治疗。机制可以更好地控制系统,停止和重新激活它。由于 T 细胞的数量对治疗的持续性和强度非常重要,因此 T 细胞的生长也受到控制,从而平衡治疗的有效性和安全性。<br />
<br />
In May 2019, researchers, in a milestone effort, reported the creation of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515"/><ref name="NAT-20190515"/><br />
<br />
<br />
<br />
Although several mechanisms can improve safety and control, limitations include the difficulty of inducing large DNA circuits into the cells and risks associated with introducing foreign components, especially proteins, into cells.<br />
<br />
虽然有几种机制可以提高安全性和可控性,但也存在局限性,包括难以将大型 DNA 电路导入细胞,以及将外来成分(特别是蛋白质)引入细胞所带来的风险。<br />
<br />
In 2017 the international [[Build-a-Cell]] large-scale research collaboration for the construction of a synthetic living cell was started,<ref>{{cite web|url=http://buildacell.io/|title=Build-a-Cell|accessdate=4 Dec 2019}}</ref> followed by national synthetic cell organizations in several countries, including FabriCell,<ref>{{cite web|url=http://fabricell.org/|title=FabriCell|accessdate=8 Dec 2019}}</ref> MaxSynBio<ref>{{cite web|url=https://www.maxsynbio.mpg.de/home/|title=MaxSynBio - Max Planck Research Network in Synthetic Biology|accessdate=8 Dec 2019}}</ref> and BaSyC.<ref>{{cite web|url=http://www.basyc.nl/|title=BaSyC|accessdate=8 Dec 2019}}</ref> The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.<ref>{{cite web|url=http://www.syntheticcell.eu/|title=SynCell EU|accessdate=8 Dec 2019}}</ref><br />
<br />
<br />
<br />
=== Drug delivery platforms 药物输送平台 ===<br />
<br />
==== Engineered bacteria-based platform 基于细菌设计的平台 ====<br />
<br />
Bacteria have long been used in cancer treatment. ''[[Bifidobacterium]]'' and ''[[Clostridium]]'' selectively colonize tumors and reduce their size.<ref name="Zu_2014">{{cite journal|vauthors=Zu C, Wang J|date=August 2014|title=Tumor-colonizing bacteria: a potential tumor targeting therapy|url=|journal=Critical Reviews in Microbiology|volume=40|issue=3|pages=225–35|doi=10.3109/1040841X.2013.776511|pmid=23964706|s2cid=26498221}}</ref> Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, [[peptide]]s that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an [[affibody molecule]] that specifically targets human [[Epidermal growth factor receptor|epidermal growth factor receptor 2]]<ref name="Gujrati_2014">{{cite journal|vauthors=Gujrati V, Kim S, Kim SH, Min JJ, Choy HE, Kim SC, Jon S|date=February 2014|title=Bioengineered bacterial outer membrane vesicles as cell-specific drug-delivery vehicles for cancer therapy|url=|journal=ACS Nano|volume=8|issue=2|pages=1525–37|doi=10.1021/nn405724x|pmid=24410085}}</ref> and a synthetic [[Adhesin molecule (immunoglobulin -like)|adhesin]].<ref name="Piñero-Lambea_2015">{{cite journal|vauthors=Piñero-Lambea C, Bodelón G, Fernández-Periáñez R, Cuesta AM, Álvarez-Vallina L, Fernández LÁ|date=April 2015|title=Programming controlled adhesion of E. coli to target surfaces, cells, and tumors with synthetic adhesins|journal=ACS Synthetic Biology|volume=4|issue=4|pages=463–73|doi=10.1021/sb500252a|pmc=4410913|pmid=25045780}}</ref> The other way is to allow bacteria to sense the [[tumor microenvironment]], for example hypoxia, by building an AND logic gate into bacteria.<ref>{{cite journal | last1 = Deyneko | first1 = I.V. | last2 = Kasnitz | first2 = N. | last3 = Leschner | first3 = S. 
| last4 = Weiss | first4 = S. | year = 2016| title = Composing a tumor specific bacterial promoter | url = | journal = PLOS ONE | volume = 11| issue = 5| page = e0155338| doi = 10.1371/journal.pone.0155338 | pmid = 27171245 | pmc = 4865170 }}</ref> The bacteria then only release target therapeutic molecules to the tumor through either [[lysis]]<ref>{{cite journal | last1 = Rice | first1 = KC | last2 = Bayles | first2 = KW | year = 2008 | title = Molecular control of bacterial death and lysis | journal = Microbiol Mol Biol Rev | volume = 72 | issue = 1| pages = 85–109 | doi = 10.1128/mmbr.00030-07 | pmid = 18322035 | pmc = 2268280 }}</ref> or the [[bacterial secretion system]].<ref>{{cite journal | last1 = Ganai | first1 = S. | last2 = Arenas | first2 = R. B. | last3 = Forbes | first3 = N. S. | year = 2009 | title = Tumour-targeted delivery of TRAIL using Salmonella typhimurium enhances breast cancer survival in mice | url = | journal = Br. J. Cancer | volume = 101 | issue = 10| pages = 1683–1691 | doi = 10.1038/sj.bjc.6605403 | pmid = 19861961 | pmc = 2778534 }}</ref> Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
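The AND-gate logic described above can be sketched as a small decision function: the payload is released only when both tumor signals are present. The signal names and threshold values below are illustrative assumptions, not parameters from any published circuit.

```python
# Illustrative sketch of the AND-gate sensing logic described above: an
# engineered bacterium releases its therapeutic payload only when BOTH
# tumor signals are present. Thresholds and signal names are hypothetical.

HYPOXIA_THRESHOLD = 0.7   # normalized hypoxia-reporter activity (assumed)
MARKER_THRESHOLD = 0.5    # normalized tumor-marker binding (assumed)

def and_gate(hypoxia_signal: float, marker_signal: float) -> bool:
    """Return True only if both inputs exceed their thresholds (AND logic)."""
    return hypoxia_signal > HYPOXIA_THRESHOLD and marker_signal > MARKER_THRESHOLD

def release_payload(hypoxia_signal: float, marker_signal: float) -> str:
    """Model the release decision: lysis only inside the tumor microenvironment."""
    if and_gate(hypoxia_signal, marker_signal):
        return "lyse"   # release therapeutic molecules via lysis
    return "hold"       # healthy tissue: no release
```

The point of the AND gate is that a single signal (hypoxia alone, or marker binding alone) never triggers release, which is what limits off-target effects in healthy tissue.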
<br />
<br />
创造新生命以及篡改现存生命引起了合成生物学领域的伦理问题,目前正处于积极的讨论中。<br />
<br />
<br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are ''[[Salmonella enterica subsp. enterica|Salmonella typhimurium]]'', [[Escherichia coli|''Escherichia coli'']], ''Bifidobacteria'', ''[[Streptococcus]]'', ''[[Lactobacillus]]'', ''[[Listeria]]'' and ''[[Bacillus subtilis]]''. Each of these species has its own properties and is unique to cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
<br />
<br />
<br />
合成生物学的伦理问题主要涉及三个方面:生物安全(biosafety)、生物安保(biosecurity)以及新生命形式的创造。其他被提及的伦理问题还包括对新创造物的监管、新创造物的专利管理、利益分配以及科研诚信。<br />
<br />
==== Cell-based platform 基于细胞的平台====<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on [[Cancer immunotherapy|immunotherapies]], mostly by engineering [[T cell]]s.<br />
<br />
<br />
重组 DNA 和转基因生物(GMO)技术的伦理问题早已浮现,许多法域已对基因工程和病原体研究实施了广泛的监管。生物伦理总统委员会前任主席艾米·古特曼(Amy Gutmann)认为,我们应当抵制过度监管合成生物学、尤其是基因工程的诱惑。古特曼表示:“监管上的审慎克制在新兴技术领域尤为重要……在这些领域,出于不确定性和对未知事物的恐惧而扼杀创新的倾向尤其强烈。生硬的法律和监管限制手段不仅可能抑制新惠益的分配,还可能因阻碍研究人员开发有效的保障措施而不利于安保与安全。”<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer [[epitope]]s. [[Chimeric antigen receptor]]s (CARs) are composed of a fragment of an [[antibody]] fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second generation CAR-based therapy was approved by the FDA.{{Citation needed|date=April 2018}}<br />
<br />
<br />
<br />
Gene switches were designed to enhance safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects.<ref>Jones, B.S., Lamb, L.S., Goldman, F. & Di Stasi, A. Improving the safety of cell therapy products by suicide gene transfer. Front. Pharmacol. 5, 254 (2014).</ref> Mechanisms can more finely control the system and stop and reactivate it.<ref>{{cite journal | last1 = Wei | first1 = P | last2 = Wong | first2 = WW | last3 = Park | first3 = JS | last4 = Corcoran | first4 = EE | last5 = Peisajovich | first5 = SG | last6 = Onuffer | first6 = JJ | last7 = Weiss | first7 = A | last8 = LiWA | year = 2012 | title = Bacterial virulence proteins as tools to rewire kinase pathways in yeast and immune cells | url = | journal = Nature | volume = 488 | issue = 7411| pages = 384–388 | doi = 10.1038/nature11259 | pmid = 22820255 | pmc = 3422413 }}</ref><ref>{{cite journal | last1 = Danino | first1 = T. | last2 = Mondragon-Palomino | first2 = O. | last3 = Tsimring | first3 = L. | last4 = Hasty | first4 = J. | year = 2010 | title = A synchronized quorum of genetic clocks | url = | journal = Nature | volume = 463 | issue = 7279| pages = 326–330 | doi = 10.1038/nature08753 | pmid = 20090747 | pmc = 2838179 }}</ref> Since the number of T-cells are important for therapy persistence and severity, growth of T-cells is also controlled to dial the effectiveness and safety of therapeutics.<ref>{{cite journal | last1 = Chen | first1 = Y. Y. | last2 = Jensen | first2 = M. C. | last3 = Smolke | first3 = C. D. | year = 2010 | title = Genetic control of mammalian T-cell proliferation with synthetic RNA regulatory systems | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 107 | issue = 19| pages = 8531–6 | doi = 10.1073/pnas.1001721107 | pmid = 20421500 | pmc = 2889348 }}</ref><br />
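The control logic of the gene switches above can be sketched as a small state machine driven by external inducers. The state names and inducer labels below are hypothetical placeholders for illustration, not a real clinical protocol.

```python
# Hypothetical sketch of a kill-/pause-switch control layer for an engineered
# cell therapy: external inducers pause, reactivate, or permanently terminate
# the circuit. State and inducer names are illustrative placeholders.

ACTIVE, PAUSED, TERMINATED = "active", "paused", "terminated"

TRANSITIONS = {
    (ACTIVE, "pause_inducer"): PAUSED,         # a signal halts expression
    (PAUSED, "reactivation_inducer"): ACTIVE,  # a second signal restores activity
    (ACTIVE, "kill_inducer"): TERMINATED,      # severe side effects: kill switch fires
    (PAUSED, "kill_inducer"): TERMINATED,
}

def step(state: str, inducer: str) -> str:
    """Apply one external signal; unknown signals leave the state unchanged.
    TERMINATED is absorbing: a killed circuit cannot be restarted."""
    if state == TERMINATED:
        return TERMINATED
    return TRANSITIONS.get((state, inducer), state)
```

Making the terminated state absorbing mirrors the safety intent of a kill switch: once therapy is aborted due to severe side effects, no later signal can revive the circuit.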
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
有这样一个道德问题,创造新的生命形式(有时被称为“扮演上帝”)是否可以接受。目前,自然界中不存在的新生命形式的创造规模很小,潜在的好处和危险仍然不为人知,并且大多数研究确保进行了认真的考虑和监督。通过制造营养缺陷,细菌和酵母可以被改造为不能生产组氨酸的类型。组氨酸是一种对所有生命来说都很重要的氨基酸。因此,这些微生物只能在实验室条件下在富含组氨酸的培养基上生长,从而消除了人们对它们可能扩散到不良区域的担忧。<br />
<br />
<br />
<br />
<br />
== Ethics 伦理问题 ==<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions.<br /><br />
<br />
一些伦理问题与生物安保有关,在这方面,生物合成技术可能被蓄意用来危害社会和/或环境。由于合成生物学引发了伦理问题和生物安保问题,人类必须考虑并规划如何处理潜在的有害创造物,以及可以采取何种伦理措施来遏制恶意的生物合成技术。然而,除了对合成生物学和生物技术公司的监管之外,这些问题并不被视为新问题,因为它们在早期关于重组 DNA 和转基因生物(GMO)的辩论中就已提出,而且许多法域已经对基因工程和病原体研究实施了广泛的监管。<br />
<br />
{{Update|section|date=January 2019}}<br />
<br />
<br />
<br />
The creation of new life and the tampering of existing life has raised [[Ethics|ethical concerns]] in the field of synthetic biology and are actively being discussed.<ref name=":3" /><br />
<br />
<br />
<br />
The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
欧盟资助的项目 SYNBIOSAFE 已经发布了关于如何管理合成生物学的报告。2007年的一篇论文确定了安全(safety)、安保(security)、伦理以及科学与社会接口方面的关键问题,该项目将科学与社会接口定义为公众教育以及科学家、企业、政府和伦理学家之间的持续对话。SYNBIOSAFE 确定的关键安保问题涉及销售合成 DNA 的公司和由业余生物学家组成的生物黑客社区。关键的伦理问题则涉及新生命形式的创造。<br />
<br />
Common ethical questions include:<br />
常见的伦理问题包括:<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<br />
<br />
随后的一份报告聚焦于生物安保,特别是所谓的“两用”挑战。例如,虽然合成生物学可能带来更高效的医疗产品生产,但它也可能被用来合成或改造有害的病原体(例如天花病毒)。生物黑客社区仍然是一个特别令人关切的问题,因为开源生物技术分布广泛且分散的特性,使得跟踪、监管或减轻潜在的生物安全与生物安保隐忧变得困难。<br />
<br />
* Is it morally right to tamper with nature?<br />
篡改自然在道德上是正确的吗?<br />
<br />
* Is one playing God when creating new life?<br />
创造新生命是否是在扮演上帝?<br />
<br />
COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009.<br />
<br />
COSY 是欧洲的另一项倡议,主要关注公众认知与交流。为了更好地向更广泛的公众传达合成生物学及其社会影响,COSY 和 SYNBIOSAFE 于2009年10月发布了一部38分钟的纪录片《SYNBIOSAFE》。<br />
<br />
* What happens if a synthetic organism accidentally escapes?<br />
如果一种合成生命体意外地从实验室中泄露出去,会发生什么?<br />
<br />
* What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?<br />
假如某个个体滥用合成生物学并制造出一个有害的实体(例如生物武器),那该怎么办?<br />
<br />
The International Association Synthetic Biology has proposed self-regulation. This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<br />
<br />
国际合成生物学协会已经建议进行自我调节。它提出了合成生物产业,特别是 DNA 合成公司,应该实施的具体措施。2007年,由主要的 DNA 合成公司的科学家领导的一个小组发表了“为 DNA 合成工业制定有效监督框架的实用计划”。<br />
<br />
* Who will have control of and access to the products of synthetic biology? <br />
谁会拥有控制和访问合成生物产品的权限?<br />
<br />
* Who will gain from these innovations? Investors? Medical patients? Industrial farmers?<br />
谁会从这些创新中获利?投资者?患者?工业农民?<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<br />
<br />
2009年7月9日至10日,美国国家学院科学、技术和法律委员会召开了一次名为“合成生物学新兴领域的机遇与挑战”的研讨会。<br />
<br />
* Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?<ref>{{Cite web|url=https://www.theguardian.com/science/2018/nov/26/worlds-first-gene-edited-babies-created-in-china-claims-scientist|title= World's first gene-edited babies created in China, claims scientist |last=Staff|first=Agencies|date=November 2018|website=The Guardian|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
* What if a new creation is deserving of moral or legal status?<br />
如果一个新生命理应拥有道德和法律地位该怎么办?<br />
<br />
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the “creation of life”. It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public". Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<br />
<br />
在第一个合成基因组发表并引发关于“创造生命”的媒体报道之后,巴拉克·奥巴马总统设立了研究合成生物学的生物伦理问题总统委员会。该委员会召开了一系列会议,并于2010年12月发布了一份题为《新方向:合成生物学和新兴技术的伦理学》的报告。委员会指出,“虽然文特尔的成就证明了一个相对较大的基因组可以被准确合成并替代另一个基因组,标志着一项重大的技术进步,但它并不等同于‘创造生命’”。报告指出,合成生物学是一个新兴领域,既带来潜在风险,也带来潜在回报。委员会没有建议改变政策或监督机制,而是呼吁继续资助相关研究,并为监测、研究新出现的伦理问题和公众教育提供新的资金。这些安保问题或许可以通过政策立法规范生物技术的工业用途来避免。生物伦理总统委员会“……为回应从化学合成基因组创造出自我复制细胞的声明,提出了18项建议,不仅是为了规范科学……也是为了教育公众”,并正据此提出关于基因操纵的联邦指导方针。理查德·列万廷(Richard Lewontin)写道,《合成生物学监督原则》中讨论的一些监督安全原则是合理的,但该宣言中的建议存在的主要问题是,“广大公众缺乏推动这些建议得到任何有意义落实的能力”。<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms.<ref>{{Cite journal|title=Synthetic Biology and Ethics: Past, Present, and Future|last=Hayry|first=Mattie|date=April 2017|journal=Cambridge Quarterly of Healthcare Ethics|volume=26|issue=2|pages=186–205|doi=10.1017/S0963180116000803|pmid=28361718}}</ref> Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<ref>{{Cite journal|title=Synthetic biology applied in the agrifood sector: Public perceptions, attitudes and implications for future studies|last=Jin |display-authors=etal |first=Shan|date=September 2019|journal=Trends in Food Science and Technology|volume=91|pages=454–466|doi=10.1016/j.tifs.2019.07.025}}</ref><ref name=":3">{{Cite journal|url=https://heinonline.org/HOL/LandingPage?handle=hein.journals/macq15&div=8&id=&page=| title=Synthetic Biology: Ethics, Exeptionalism and Expectations| pages=45| last=Newson|first=AJ|date=2015|journal=Macquarie Law Journal| volume=15|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
<br />
<br />
Ethical issues have surfaced for [[recombinant DNA]] and [[genetically modified organism]] (GMO) technologies and extensive regulations of [[genetic engineering]] and pathogen research were in place in many jurisdictions. [[Amy Gutmann]], former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<ref>{{cite journal | last = Gutmann | first = Amy | date = 2012 | title = The Ethics of Synthetic Biology | volume=41 | issue=4 | pages = 17–22 | journal = The Hastings Center Report | doi = 10.1002/j.1552-146X.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
<br />
<br />
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms.<br />
<br />
合成生物学的危害包括对工人和公众的生物安全危害、蓄意设计生物体以造成危害所带来的生物安保危害,以及环境危害。生物安全危害与现有生物技术领域的危害类似,主要是接触病原体和有毒化学品,尽管新型合成生物体可能带来新的风险。在生物安保方面,人们担心人工合成或重新设计的生物体在理论上可能被用于生物恐怖主义。潜在的风险包括从零开始重造已知的病原体、将现有病原体改造得更加危险,以及设计微生物来生产有害的生物化学物质。最后,环境危害包括对生物多样性和生态系统服务的不利影响,包括农业使用合成生物体可能导致的土地利用变化。<br />
<br />
=== The "creation" of life 创造生命 ===<br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<br />
<br />
通常认为,现有的转基因生物风险分析系统足以用于合成生物体,尽管对于由单个基因序列"自下而上"构建的生物体可能存在困难。合成生物学一般适用于现有的转基因生物和生物技术条例,以及针对下游商业产品的现有条例,尽管各法域一般都没有专门针对合成生物学的条例。<br />
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature remains small in scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies.<ref name=":3" /> Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how [[algal bloom]]s kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to possess [[nociception|the ability to sense pain]], [[sentience]], or self-perception. Should such life be given moral or legal rights? If so, how?<br />
<br />
<br />
<br />
=== Biosafety and biocontainment 生物安全与生物防护 ===<br />
<br />
What is most ethically appropriate when considering biosafety measures? How can accidental introduction of synthetic life in the natural environment be avoided? Much ethical consideration and critical thought have been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics, as there has yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild.<br />
<br />
<br />
<br />
In general, existing [[Hierarchy of hazard controls|hazard controls]], risk assessment methodologies, and regulations developed for traditional [[genetically modified organism]]s (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" [[biocontainment]] methods in a laboratory context include physical containment through [[biosafety cabinet]]s and [[glovebox]]es, as well as [[personal protective equipment]]. In an agricultural context they include isolation distances and [[pollen]] barriers, similar to methods for [[Biocontainment of genetically modified organisms|biocontainment of GMOs]]. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent [[horizontal gene transfer]] to natural organisms. Examples of intrinsic biocontainment include [[auxotrophy]], biological [[kill switch]]es, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of [[Xenobiology|xenobiological]] organisms using alternative biochemistry, for example using artificial [[xeno nucleic acid]]s (XNA) instead of DNA.<ref name=":12" /><ref name=":32">{{Cite journal|url=https://publications.europa.eu/en/publication-detail/-/publication/bfd7d06c-d3ae-11e5-a4b5-01aa75ed71a1/language-en|title=Opinion on synthetic biology II: Risk assessment methodologies and safety aspects|last=|first=|date=2016-02-12|website=EU [[Directorate-General for Health and Consumers]]|pages=|via=|doi=10.2772/63529|archive-url=|archive-date=|access-date=|volume=|publisher=Publications Office}}</ref> Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce [[histidine]], an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
<br />
<br />
<br />
<br />
=== Biosecurity 生物安全 ===<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises both ethical and biosecurity issues, humanity must consider and plan how to deal with potentially harmful creations, and what kinds of ethical measures could be employed to deter nefarious biosynthetic technologies. Apart from the regulation of synthetic biology and biotechnology companies,<ref name="Bügl, H. et al. 2007 627–629">{{cite journal | vauthors = Bügl H, Danner JP, Molinari RJ, Mulligan JT, Park HO, Reichert B, Roth DA, Wagner R, Budowle B, Scripp RM, Smith JA, Steele SJ, Church G, Endy D | title = DNA synthesis and biological security | journal = Nature Biotechnology | volume = 25 | issue = 6 | pages = 627–9 | date = June 2007 | pmid = 17557094 | doi = 10.1038/nbt0607-627 | s2cid = 7776829 }}</ref><ref>{{cite web|url = http://www.synbioproject.org/site/assets/files/1335/hastings.pdf|title = Ethical Issues in Synthetic Biology: An Overview of the Debates|date = |access-date = |website = }}</ref> however, the issues are not seen as new: they were raised during the earlier [[recombinant DNA]] and [[genetically modified organism]] (GMO) debates, and extensive regulations of [[genetic engineering]] and pathogen research are already in place in many jurisdictions.<ref name="bioethics.gov">Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/synthetic-biology-report NEW DIRECTIONS The Ethics of Synthetic Biology and Emerging Technologies] Retrieved 2012-04-14.</ref><br /><br />
<br />
<br />
<br />
=== European Union 欧盟方面 ===<br />
<br />
<br />
<br />
The [[European Union]]-funded project SYNBIOSAFE<ref>[http://www.synbiosafe.eu/ SYNBIOSAFE official site]</ref> has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.<ref name="Priorities">{{cite journal | vauthors = Schmidt M, Ganguli-Mitra A, Torgersen H, Kelle A, Deplazes A, Biller-Andorno N | title = A priority paper for the societal and ethical aspects of synthetic biology | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 3–7 | date = December 2009 | pmid = 19816794 | pmc = 2759426 | doi = 10.1007/s11693-009-9034-7 | url = http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf }}</ref><ref>Schmidt M. Kelle A. Ganguli A, de Vriend H. (Eds.) 2009. [https://www.springer.com/biomed/book/978-90-481-2677-4 "Synthetic Biology. The Technoscience and its Societal Consequences".] Springer Academic Publishing.</ref> The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the [[Do-it-yourself biology|biohacking]] community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called [[dual use technology|dual-use]] challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., [[smallpox]]).<ref>{{cite journal | vauthors = Kelle A | title = Ensuring the security of synthetic biology-towards a 5P governance strategy | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 85–90 | date = December 2009 | pmid = 19816803 | pmc = 2759433 | doi = 10.1007/s11693-009-9041-8 }}</ref> The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<ref>{{cite journal | vauthors = Schmidt M | title = Diffusion of synthetic biology: a challenge to biosafety | journal = Systems and Synthetic Biology | volume = 2 | issue = 1–2 | pages = 1–6 | date = June 2008 | pmid = 19003431 | pmc = 2671588 | doi = 10.1007/s11693-008-9018-z | url = http://www.markusschmidt.eu/pdf/Diffusion_of_synthetic_biology.pdf }}</ref><br />
<br />
<br />
<br />
COSY, another European initiative, focuses on public perception and communication.<ref>[http://www.synbio.at/ COSY: Communicating Synthetic Biology]</ref><ref>{{cite journal | vauthors = Kronberger N, Holtz P, Kerbe W, Strasser E, Wagner W | title = Communicating Synthetic Biology: from the lab via the media to the broader public | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 19–26 | date = December 2009 | pmid = 19816796 | pmc = 2759424 | doi = 10.1007/s11693-009-9031-x }}</ref><ref>{{cite journal | vauthors = Cserer A, Seiringer A | title = Pictures of Synthetic Biology : A reflective discussion of the representation of Synthetic Biology (SB) in the German-language media and by SB experts | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 27–35 | date = December 2009 | pmid = 19816797 | pmc = 2759430 | doi = 10.1007/s11693-009-9038-3 }}</ref> To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published ''SYNBIOSAFE'', a 38-minute documentary film, in October 2009.<ref>[http://www.synbiosafe.eu/DVD COSY/SYNBIOSAFE Documentary]</ref><br />
<br />
<br />
<br />
The International Association Synthetic Biology has proposed self-regulation.<ref>Report of IASB [http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf "Technical solutions for biosecurity in synthetic biology"] {{webarchive |url=https://web.archive.org/web/20110719031805/http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf |date=July 19, 2011 }}, Munich, 2008</ref> This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<ref name="Bügl, H. et al. 2007 627–629" /><br />
<br />
<br />
<br />
=== United States 美国方面 ===<br />
<br />
<br />
<br />
In January 2009, the [[Alfred P. Sloan Foundation]] funded the [[Woodrow Wilson Center]], the [[Hastings Center]], and the [[J. Craig Venter Institute]] to examine the public perception, ethics and policy implications of synthetic biology.<ref>Parens E., Johnston J., Moses J. [http://www.thehastingscenter.org/who-we-are/our-research/selected-past-projects/ethical-issues-in-synthetic-biology-2/ Ethical Issues in Synthetic Biology.] 2009.</ref><br />
<br />
<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<ref>[http://sites.nationalacademies.org/PGA/stl/PGA_050738 NAS Symposium official site]</ref><br />
<br />
<br />
<br />
After the publication of the [[Mycoplasma laboratorium|first synthetic genome]] and the accompanying media coverage about "life" being created, President [[Barack Obama]] established the [[Presidential Commission for the Study of Bioethical Issues]] to study synthetic biology.<ref>Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/node/353 FAQ]</ref> The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'."<ref>[http://bioethics.gov/node/353 Synthetic Biology F.A.Q.'s | Presidential Commission for the Study of Bioethical Issues]</ref> It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education.<ref name="bioethics.gov" /><br />
<br />
<br />
<br />
Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact".<ref name=":2">{{cite journal | vauthors = Erickson B, Singh R, Winters P | title = Synthetic biology: regulating industry uses of new biotechnologies | journal = Science | volume = 333 | issue = 6047 | pages = 1254–6 | date = September 2011 | pmid = 21885775 | doi = 10.1126/science.1211066 | bibcode = 2011Sci...333.1254E | s2cid = 1568198 | url = https://semanticscholar.org/paper/6ae989f6b07dc3c8a8694792d6fe8f036a0e0292 }}</ref> These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".<ref name=":2" /><br />
<br />
<br />
<br />
=== Opposition 反对意见 ===<br />
<br />
On March 13, 2012, over 100 environmental and civil society groups, including [[Friends of the Earth]], the [[International Center for Technology Assessment]] and the [[ETC Group (AGETC)|ETC Group]] issued the manifesto ''The Principles for the Oversight of Synthetic Biology''. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the [[human genome]] or [[human microbiome]].<ref>Katherine Xue for Harvard Magazine. September–October 2014 [http://harvardmagazine.com/2014/09/synthetic-biologys-new-menagerie Synthetic Biology’s New Menagerie]</ref><ref>Yojana Sharma for Scidev.net March 15, 2012. [http://www.scidev.net/global/genomics/news/ngos-call-for-international-regulation-of-synthetic-biology.html NGOs call for international regulation of synthetic biology]</ref> [[Richard Lewontin]] wrote that some of the safety tenets for oversight discussed in ''The Principles for the Oversight of Synthetic Biology'' are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<ref>[http://www.nybooks.com/articles/archives/2014/may/08/new-synthetic-biology-who-gains/?insrc=rel#fnr-1 The New Synthetic Biology: Who Gains?] (2014-05-08), [[Richard C. Lewontin]], ''[[New York Review of Books]]''</ref><br />
<br />
<br />
<br />
== Health and safety 健康和安全 ==<br />
<br />
{{Main|Hazards of synthetic biology}}<br />
<br />
<br />
<br />
The hazards of synthetic biology include [[biosafety]] hazards to workers and the public, [[biosecurity]] hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks.<ref name=":02">{{Cite journal|url=https://blogs.cdc.gov/niosh-science-blog/2017/01/24/synthetic-biology/|title=Synthetic Biology and Occupational Risk|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|date=2017-01-24|journal=Journal of Occupational and Environmental Hygiene|archive-url=|archive-date=|access-date=2018-11-30|last3=Schulte|first3=Paul|volume=14|issue=3|pages=224–236|pmid=27754800|doi=10.1080/15459624.2016.1237031|s2cid=205893358}}</ref><ref name=":12">{{Cite journal|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|last3=Schulte|first3=Paul|date=2016-10-18|title=Synthetic biology and occupational risk|journal=Journal of Occupational and Environmental Hygiene|volume=14|issue=3|pages=224–236|doi=10.1080/15459624.2016.1237031|pmid=27754800|s2cid=205893358|issn=1545-9624}}</ref> For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for [[bioterrorism]]. 
Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.<ref name=":7">{{Cite book|title=Biodefense in the Age of Synthetic Biology|date=2018-06-19|publisher=[[National Academies of Sciences, Engineering, and Medicine]]|isbn=9780309465182|location=|pages=|doi=10.17226/24890|pmid=30629396|last1=National Academies Of Sciences|first1=Engineering|author2=Division on Earth Life Studies|last3=Board On Life|first3=Sciences|author4=Board on Chemical Sciences Technology|author5=Committee on Strategies for Identifying Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology}}</ref> Lastly, environmental hazards include adverse effects on [[biodiversity]] and [[ecosystem services]], including potential changes to land use resulting from agricultural use of synthetic organisms.<ref name=":8">{{Cite web|url=http://ec.europa.eu/environment/integration/research/newsalert/multimedia/synthetic_biology_and_biodiversity.htm|title=Future Brief: Synthetic biology and biodiversity|last=|first=|date=September 2016|website=European Commission|pages=14–15|archive-url=|archive-date=|access-date=2019-01-14}}</ref><ref>{{Cite web|url=https://publications.europa.eu/en/publication-detail/-/publication/9b231c71-faf1-11e5-b713-01aa75ed71a1/language-en/format-PDF|title=Final opinion on synthetic biology III: Risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology|last=|first=|date=2016-04-04|website=EU Directorate-General for Health and Food Safety|pages=8, 27|archive-url=|archive-date=|access-date=2019-01-14}}</ref><br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.<ref name=":32" /><ref name=":22">{{Cite web|url=http://www.hse.gov.uk/research/rrpdf/rr944.pdf|title=Synthetic biology: A review of the technology, and current and future needs from the regulatory framework in Great Britain|last1=Bailey|first1=Claire|last2=Metcalf|first2=Heather|date=2012|website=UK [[Health and Safety Executive]]|archive-url=|archive-date=|access-date=2018-11-29|last3=Crook|first3=Brian}}</ref> Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<ref name=":5">{{Citation|last1=Pei|first1=Lei|title=Regulatory Frameworks for Synthetic Biology|date=2012|work=Synthetic Biology|pages=157–226|publisher=John Wiley & Sons, Ltd|doi=10.1002/9783527659296.ch5|isbn=9783527659296|last2=Bar‐Yam|first2=Shlomiya|last3=Byers‐Corbin|first3=Jennifer|last4=Casagrande|first4=Rocco|last5=Eichler|first5=Florentine|last6=Lin|first6=Allen|last7=Österreicher|first7=Martin|last8=Regardh|first8=Pernilla C.|last9=Turlington|first9=Ralph D.}}</ref><ref name=":4">{{Cite journal|last=Trump|first=Benjamin D.|date=2017-11-01|title=Synthetic biology regulation and governance: Lessons from TAPIC for the United States, European Union, and Singapore|journal=Health Policy|volume=121|issue=11|pages=1139–1146|doi=10.1016/j.healthpol.2017.07.010|pmid=28807332|issn=0168-8510|doi-access=free}}</ref><br />
<br />
<br />
<br />
== See also 请参阅 ==<br />
<br />
{{Colbegin|colwidth=20em}}<br />
<br />
* ''[[ACS Synthetic Biology]]'' (journal)<br />
<br />
* [[Bioengineering]]<br />
<br />
* [[Biomimicry]]<br />
<br />
* [[Carlson Curve]]<br />
<br />
* [[Chiral life concept]]<br />
<br />
* [[Computational biology]]<br />
<br />
* [[Computational biomodeling]]<br />
<br />
* [[DNA digital data storage]]<br />
<br />
* [[Engineering biology]]<br />
<br />
{{Colend}}<br />
<br />
[[Category:Biotechnology]]<br />
[[Category:Molecular genetics]]<br />
[[Category:Systems biology]]<br />
[[Category:Bioinformatics]]<br />
[[Category:Biocybernetics]]<br />
[[Category:Appropriate technology]]<br />
[[Category:Emerging technologies]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biology]]. Its edit history can be viewed at [[合成生物学/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E7%94%A8%E6%88%B7:%E7%B2%B2%E5%85%B0&diff=19275用户:粲兰2020-11-27T06:47:06Z<p>粲兰:</p>
<hr />
<div>==Self-Introduction 自我介绍==<br />
{| class="wikitable"<br />
|-<br />
! Attributes !! Details<br />
|-<br />
| Name: || 袁一博 (粲兰)<br />
|-<br />
| College: || 山东大学 本科二年级 <br />
|-<br />
| Major : || 统计学<br />
|-<br />
| Interested In : || 生物统计 网络科学 机器学习<br />
|-<br />
| Label: || 开朗 外向 求知 向善<br />
|-<br />
| Hobby: || 足球 音乐 动漫 打王者 学习<br />
|-<br />
| Motto : || 每一次努力,都是未来的自己在向现在的自己求救。<br />
|-<br />
| Contact Me:|| QQ:1363052561<br />
|-<br />
|}<br />
<br />
==Story With Jizhi 我与集智的故事==<br />
===First Meet 初识===<br />
看到了学长转发的消息,感觉很有意思,又自认英语水平不错,所以就来啦。<br />
<br />
===Experience 经历===<br />
一开始本来是想趁机多学一些知识,结果发现自己根本都看不懂哈哈哈。<br />
<br />
<br />
要做的还有很多,要学的还有很多。在这个家庭里看见这么多人都在努力,为玩手机的自己感到不齿。一起加油吧!<br />
<br />
<br />
今后也会尽己所能为大家带来更多高质量的词条!<br />
<br />
<br />
<br />
以后也会时不时的更新一下主页,谢谢阅读。</div>粲兰https://wiki.swarma.org/index.php?title=LFR%E7%AE%97%E6%B3%95&diff=19052LFR算法2020-11-22T15:23:00Z<p>粲兰:</p><br />
<hr />
<div>{{#seo:<br />
|keywords=Lancichinetti–Fortunato–Radicchi benchmark,基准网络,节点度分布,社区规模分布<br />
|description=Lancichinetti–Fortunato–Radicchi基准,一种生成具有已知社区结构的人工基准网络的算法<br />
}}<br />
'''<font color="#ff8000">兰奇基内蒂-福图纳托-拉迪奇基准测试 Lancichinetti–Fortunato–Radicchi benchmark (LFR)</font>'''是一种生成'''<font color="#ff8000">基准网络(benchmark network)</font>'''(类似于真实世界网络的人工网络)的算法。这些网络具有预先已知的社区结构,可用于比较不同的社区检测方法。<ref>Hua-Wei Shen (2013). "Community Structure of Complex Networks". Springer Science & Business Media. 11–12.</ref>与其他方法相比,该基准测试的优点在于它考虑了'''<font color="#ff8000">节点度分布(node degree distribution)</font>'''和'''<font color="#ff8000">社区规模分布(community size distribution)</font>'''的[[异质性]]。<ref name="original">A. Lancichinetti, S. Fortunato, and F. Radicchi.(2008) Benchmark graphs for testing community detection algorithms. Physical Review E, 78. {{ArXiv|0805.4770}}</ref><br />
<br />
==算法==<br />
<br />
'''<font color="#ff8000">节点度(node degree)</font>'''和'''<font color="#ff8000">社区规模(community size)</font>'''都服从[[幂律分布]],但指数不同。基准测试假设节点度和社区规模分别服从指数为<math>\gamma</math>和<math>\beta</math>的幂律分布。<math>N</math>是节点的数量,平均度为<math>\langle k \rangle</math>。混合参数<math>\mu</math>是一个节点的相邻节点中不与该节点同属任何社区的平均比例,它控制着社区之间边的比例。<ref>Twan van Laarhoven and Elena Marchiori (2013). "Network community detection with edge classifiers trained on LFR graphs". https://www.cs.ru.nl/~elenam/paper-learning-community.pdf</ref><br />
<br />
<br />
生成基准网络的步骤如下:<br />
<br />
:'''步骤1''':生成一个网络,其节点度服从指数为<math>\gamma</math>的幂律分布,并选择该分布的极值<math> k_{\min} </math>和<math> k_{\max} </math>,以获得期望的平均度<math>\langle k\rangle</math>。<br />
<br />
:'''步骤2''':每个节点有<math>(1 - \mu)</math>比例的边与同一社区内的节点相连,其余<math>\mu</math>比例的边与社区外的节点相连。<br />
<br />
:'''步骤3''':根据指数为<math>\beta</math>的幂律分布生成社区规模。所有社区规模之和必须等于<math>N</math>。社区规模的最小值<math> s_{\min} </math>和最大值<math> s_{\max} </math>必须满足社区的定义,使得每个非孤立的节点都至少属于一个社区:<math> s_{\min} > k_{\min} </math> ; <math> s_{\max} > k_{\max} </math><br />
<br />
:'''步骤4''':最初,没有为任何社区分配任何节点。然后,每个节点被随机分配到一个社区。只要社区内相邻节点的数量不超过社区规模,就会向社区添加一个新节点,否则就不会添加。在接下来的迭代中,尚未归属的节点被随机分配给某个社区;如果该社区已满,即名额已经用尽,则随机选择该社区中的一个节点并将其移出。当所有社区都已填满且所有节点都至少属于一个社区时,迭代停止。<br />
<br />
:'''步骤5''':对节点重新布线,保持相同的节点度,但只影响内部和外部链接,使得每个节点在社区外的链接数量约等于混合参数<math>\mu</math>。<ref name="original"/><br />
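<br />
上述步骤可以用 networkx 库中的 LFR_benchmark_graph 生成器直接实现,其参数 tau1、tau2、mu 分别对应这里的 <math>\gamma</math>、<math>\beta</math> 和 <math>\mu</math>。下面是一个最小示例,具体参数取值仅为演示用的假设:<br />

```python
import networkx as nx

# LFR 基准网络:n 个节点,度分布指数 tau1(对应 gamma),
# 社区规模分布指数 tau2(对应 beta),混合参数 mu。
# 以下参数取值仅为演示用的假设。
G = nx.LFR_benchmark_graph(
    n=250, tau1=3, tau2=1.5, mu=0.1,
    average_degree=5, min_community=20, seed=10,
)

# 生成器把每个节点的真实社区以 frozenset 的形式
# 存放在节点属性 "community" 中,可据此还原基准分割
communities = {frozenset(G.nodes[v]["community"]) for v in G}
```

由此得到的 communities 即为预先已知的社区结构,可作为比较各种社区检测算法的基准。<br />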
<br />
==调试==<br />
<br />
考虑社区的一个不重叠分割。随机选择的节点所属的社区服从分布<math>p(C)</math>,它表示随机选取的节点属于社区<math>C</math>的概率。基准分割对应分布<math>p(C_1)</math>;对同一网络,由某个社区检测算法预测得出的分割对应分布<math>p(C_2)</math>。<br />
<br />
<br />
联合分布为<math>p(C_1, C_2)</math>。这两个分割的相似性可以通过'''<font color="#ff8000">归一化互信息</font>'''得到。<br />
<br />
<br />
: <math> I_n = \frac{\sum_{C_1,C_2} p(C_1,C_2) \log_2 \frac{p(C_1,C_2)}{p(C_1)p(C_2)} }{\frac 1 2 H(\{p(C_1)\}) + \frac 1 2 H(\{p(C_2)\})} </math><br />
<br />
<br />
如果<math> I_n=1 </math>,则基准分割与检测到的分割完全相同;如果<math> I_n=0 </math>,则它们彼此独立。<ref>Barabasi, A.-L. (2014). "Network Science". Chapter 9: Communities.</ref><br />
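<br />
上面的归一化互信息可以直接按公式实现。下面是一个仅依赖 Python 标准库的草稿:输入为两个分割,每个分割用"每个节点的社区标签"列表表示;函数名 nmi 为演示用的假设命名(注意:当某个分割只有一个社区时熵为零、分母为零,此草稿未处理该边界情况):<br />

```python
from collections import Counter
from math import log2

def nmi(part1, part2):
    """两个分割之间的归一化互信息 I_n。
    part1、part2 是等长列表,第 i 个元素是节点 i 的社区标签。"""
    n = len(part1)
    p1 = Counter(part1)                 # p(C1) 的计数
    p2 = Counter(part2)                 # p(C2) 的计数
    joint = Counter(zip(part1, part2))  # 联合分布 p(C1, C2) 的计数
    mi = sum(
        (c / n) * log2((c / n) / ((p1[a] / n) * (p2[b] / n)))
        for (a, b), c in joint.items()
    )
    h1 = -sum((c / n) * log2(c / n) for c in p1.values())
    h2 = -sum((c / n) * log2(c / n) for c in p2.values())
    return mi / (0.5 * h1 + 0.5 * h2)
```

例如,两个完全相同的分割给出 <math>I_n = 1</math>,而相互独立的分割给出 <math>I_n = 0</math>,与上式一致。<br />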
<br />
<br />
<br />
==参考文献==<br />
<br />
{{Reflist}}<br />
<br />
<br />
<br />
[[Category:算法]]<br />
[[Category:随机图]]<br />
[[Category:基准(计算)]]<br />
[[Category:统计方法]]<br />
<br />
----<br />
本中文词条由[[用户:粲兰|粲兰]]翻译,[[用户:黄秋莉|黄秋莉]]审校,[[用户:薄荷|薄荷]]编辑,欢迎在讨论页面留言。<br />
<br />
'''本词条内容翻译自 wikipedia.org,遵守 CC3.0协议。'''</div>粲兰https://wiki.swarma.org/index.php?title=%E5%B8%95%E7%B4%AF%E6%89%98%E6%9C%80%E4%BC%98_Pareto_optimality&diff=19051帕累托最优 Pareto optimality2020-11-22T15:18:54Z<p>粲兰:</p>
<hr />
<div>此词条由袁一博翻译,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{short description|State in which no reallocation of resources can make everyone at least as well off}}<br />
<br />
{{Use mdy dates|date=January 2016}}<br />
<br />
<br />
<br />
'''Pareto efficiency''' or '''Pareto optimality''' is a situation that cannot be modified so as to make any one individual or preference criterion better off without making at least one individual or preference criterion worse off. The concept is named after [[Vilfredo Pareto]] (1848–1923), Italian engineer and economist, who used the concept in his studies of [[economic efficiency]] and [[income distribution]]. The following three concepts are closely related:<br />
<br />
Pareto efficiency or Pareto optimality is a situation that cannot be modified so as to make any one individual or preference criterion better off without making at least one individual or preference criterion worse off. The concept is named after Vilfredo Pareto (1848–1923), Italian engineer and economist, who used the concept in his studies of economic efficiency and income distribution. The following three concepts are closely related:<br />
<br />
帕累托效率或帕累托最优指这样一种状态:无法在不使至少一个个体或偏好准则变差的情况下,使任何一个个体或偏好准则变得更好。这个概念以意大利工程师、经济学家维尔弗雷多·帕累托 Vilfredo Pareto(1848-1923)的名字命名,他在研究'''<font color="#ff8000">经济效率(economic efficiency)</font>'''和'''<font color="#ff8000">收入分配(income distribution)</font>'''时使用了这个概念。以下三个概念密切相关:<br />
--[[用户:趣木木|趣木木]]([[用户讨论:趣木木|讨论]])专有名词与疑难句 后面需要附上英文<br />
<br />
<br />
* Given an initial situation, a '''Pareto improvement''' is a new situation which is weakly preferred by all agents, and strictly preferred by at least one agent. In a sense, it is a unanimously-agreed improvement: if we move to the new situation, some agents will gain, and no agents will lose.<br />
<br />
* A situation is called '''Pareto dominated''' if it has a Pareto improvement. <br />
<br />
* A situation is called '''Pareto optimal''' or '''Pareto efficient''' if it is not Pareto dominated.<br />
<br />
* 在给定的初始状态下,帕累托改进是指一个新的状态:所有主体都(弱)偏好它,并且至少有一个主体严格偏好它。在某种意义上,它是一种得到一致同意的改进:如果我们转移到这个新状态,一些主体会获益,而没有任何主体会受损。<br />
*一种状态如果存在帕累托改进,则称它是受帕累托支配的。<br />
*一种状态如果不受帕累托支配,则称它是帕累托最优的或帕累托有效的。<br />
<br />
<br />
<br />
The '''Pareto frontier''' is the set of all Pareto efficient allocations, conventionally shown [[Chart|graphically]]. It also is variously known as the '''Pareto front''' or '''Pareto set'''.<ref>{{Cite web|url=http://www.cenaero.be/Page.asp?docid=27103&|title=Pareto Front|last=proximedia|website=www.cenaero.be|access-date=2018-10-08}}</ref><br />
<br />
The Pareto frontier is the set of all Pareto efficient allocations, conventionally shown graphically. It also is variously known as the Pareto front or Pareto set.<br />
<br />
帕累托边界是所有帕累托有效分配的集合,通常以图形方式表示。它也被称为帕累托前沿或帕累托集。<br />
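<br />
帕累托边界的定义可以用一个简短的代码草稿来说明:给定一组多目标得分(这里假设每个坐标都是越大越好),帕累托边界就是未被任何其他点支配的点的集合。函数名 pareto_front 为演示用的假设命名:<br />

```python
def pareto_front(points):
    """返回未被其他点支配的点的列表(即帕累托边界)。
    假设每个坐标都越大越好;q 支配 p 当且仅当
    q 在所有坐标上都不小于 p,且 q 与 p 不同。"""
    front = []
    for p in points:
        dominated = any(
            q != p and all(q[i] >= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

例如,在点集 (1, 2)、(2, 1)、(0, 0)、(1, 1) 中,(0, 0) 和 (1, 1) 都受 (1, 2) 支配,因而帕累托边界只包含 (1, 2) 和 (2, 1)。<br />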
<br />
<br />
<br />
"Pareto efficiency" is considered as a minimal notion of efficiency that does not necessarily result in a socially desirable distribution of resources: it makes no statement about [[Social equality|equality]], or the overall well-being of a society.<ref>{{cite journal |authorlink=Amartya Sen |first=A. |last=Sen |title=Markets and freedom: Achievements and limitations of the market mechanism in promoting individual freedoms |journal=Oxford Economic Papers |volume=45 |issue=4 |pages=519–541 |date=October 1993 |jstor=2663703 |url=http://www.cs.princeton.edu/courses/archive/spr06/cos444/papers/sen.pdf |doi=10.1093/oxfordjournals.oep.a042106 }}</ref><ref>{{cite book |first=N. |last=Barr |author-link=Nicholas Barr|chapter=3.2.2 The relevance of efficiency to different theories of society |title=Economics of the Welfare State |year=2012 |publisher=[[Oxford University Press]] |isbn=978-0-19-929781-8 |pages=[https://books.google.com/books?id=DOg0BM1XiqQC&pg=PA46 46–49] |edition=5th}}</ref>{{rp|46–49}} It is a necessary, but not sufficient, condition of efficiency.<br />
<br />
"Pareto efficiency" is considered as a minimal notion of efficiency that does not necessarily result in a socially desirable distribution of resources: it makes no statement about equality, or the overall well-being of a society. It is a necessary, but not sufficient, condition of efficiency.<br />
<br />
“帕累托效率”被认为是一种最低限度的效率概念,它不一定产生社会所期望的资源分配:它没有对平等或一个社会的总体福祉作出任何论断。它是效率的必要条件,但不是充分条件。<br />
<br />
<br />
<br />
In addition to the context of efficiency in ''allocation'', the concept of Pareto efficiency also arises in the context of [[productive efficiency|''efficiency in production'']] vs. ''[[x-inefficiency]]'': a set of outputs of goods is Pareto efficient if there is no feasible re-allocation of productive inputs such that output of one product increases while the outputs of all other goods either increase or remain the same.<ref>[[John D. Black|Black, J. D.]], Hashimzade, N., & [[Gareth Myles|Myles, G.]], eds., ''A Dictionary of Economics'', 5th ed. (Oxford: Oxford University Press, 2017), [https://books.google.com/books?id=WyvYDQAAQBAJ&pg=PT459 p. 459].</ref>{{rp|459}}<br />
<br />
In addition to the context of efficiency in allocation, the concept of Pareto efficiency also arises in the context of efficiency in production vs. x-inefficiency: a set of outputs of goods is Pareto efficient if there is no feasible re-allocation of productive inputs such that output of one product increases while the outputs of all other goods either increase or remain the same.<br />
<br />
除了分配效率的背景之外,帕累托效率的概念也出现在'''<font color="#ff8000">生产效率(efficiency in production)</font>'''与'''<font color="#ff8000">x-低效率(x-inefficiency)</font>'''相对比的背景之下:如果不存在一种可行的生产投入再分配,使得某一种产品的产出增加,而所有其他产品的产出增加或保持不变,那么这一组产品的产出就是帕累托有效的。<br />
<br />
<br />
<br />
Besides economics, the notion of Pareto efficiency has been applied to the selection of alternatives in [[engineering]] and [[biology]]. Each option is first assessed, under multiple criteria, and then a subset of options is ostensibly identified with the property that no other option can categorically outperform the specified option. It is a statement of impossibility of improving one variable without harming other variables in the subject of [[multi-objective optimization]] (also termed '''Pareto optimization''').<br />
<br />
Besides economics, the notion of Pareto efficiency has been applied to the selection of alternatives in engineering and biology. Each option is first assessed, under multiple criteria, and then a subset of options is ostensibly identified with the property that no other option can categorically outperform the specified option. It is a statement of impossibility of improving one variable without harming other variables in the subject of multi-objective optimization (also termed Pareto optimization).<br />
<br />
除了经济学,帕累托效率的概念还被应用于工程学和生物学中备选方案的选择。首先根据多项标准对每个选项进行评估,然后确定一个选项子集,其性质是没有其他选项能够全面胜过其中的任一选项。在'''<font color="#ff8000">多目标优化(multi-objective optimization)</font>'''(又称帕累托优化)中,这是一种关于不可能在不损害其他变量的情况下改进某一变量的陈述。<br />
<br />
<br />
<br />
== Overview 综述 ==<br />
<br />
<br />
<br />
<br />
"Pareto optimality" is a formally defined concept used to describe when an [[resource allocation|allocation]] is optimal. An allocation is ''not'' Pareto optimal if there is an alternative allocation where improvements can be made to at least one participant's well-being without reducing any other participant's well-being. If there is a transfer that satisfies this condition, the reallocation is called a "Pareto improvement". When no further Pareto improvements are possible, the allocation is a "Pareto optimum".<br />
<br />
"Pareto optimality" is a formally defined concept used to describe when an allocation is optimal. An allocation is not Pareto optimal if there is an alternative allocation where improvements can be made to at least one participant's well-being without reducing any other participant's well-being. If there is a transfer that satisfies this condition, the reallocation is called a "Pareto improvement". When no further Pareto improvements are possible, the allocation is a "Pareto optimum".<br />
<br />
“帕累托最优”是一个正式定义的概念,用来描述一个分配何时是最优的。如果有一种替代性的分配方式可以在不降低任何其他参与者福祉的情况下改善至少一个参与者的福祉,那么这种分配就不是帕累托最优的。如果有一个转移满足这个条件,这个再分配就被称为“帕累托改进”。当无法进一步实现帕累托改进时,这个分配就是“帕累托最优”。<br />
<br />
<br />
<br />
The formal presentation of the concept in an economy is as follows: Consider an economy with <math> n</math> agents and <math> k </math> goods. Then an allocation <math> \{x_1, ..., x_n\} </math>, where <math> x_i \in \mathbb{R}^k </math> for all ''i'', is ''Pareto optimal'' if there is no other feasible allocation <math> \{x_1', ..., x_n'\} </math> such that, for utility function <math> u_i </math> for each agent <math> i </math>, <math> u_i(x_i') \geq u_i(x_i) </math> for all <math> i \in \{1, ..., n\} </math> with <math> u_i(x_i') > u_i(x_i) </math> for some <math> i</math>.<ref name="AndreuMas95">{{citation|author-link=Andreu Mas-Colell|last1=Mas-Colell|first1=A.|first2=Michael D.|last2=Whinston|first3=Jerry R.|last3=Green|year=1995|title=Microeconomic Theory|chapter=Chapter 16: Equilibrium and its Basic Welfare Properties|publisher=Oxford University Press|isbn=978-0-19-510268-0|url-access=registration|url=https://archive.org/details/isbn_9780198089537}}</ref> Here, in this simple economy, "feasibility" refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. In a more complex economy with production, an allocation would consist both of consumption [[Vector space|vector]]s and production vectors, and feasibility would require that the total amount of each consumed good is no greater than the initial endowment plus the amount produced.<br />
<br />
The formal presentation of the concept in an economy is as follows: Consider an economy with <math> n</math> agents and <math> k </math> goods. Then an allocation <math> \{x_1, ..., x_n\} </math>, where <math> x_i \in \mathbb{R}^k </math> for all i, is Pareto optimal if there is no other feasible allocation <math> \{x_1', ..., x_n'\} </math> such that, for utility function <math> u_i </math> for each agent <math> i </math>, <math> u_i(x_i') \geq u_i(x_i) </math> for all <math> i \in \{1, ..., n\} </math> with <math> u_i(x_i') > u_i(x_i) </math> for some <math> i</math>. Here, in this simple economy, "feasibility" refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. In a more complex economy with production, an allocation would consist both of consumption vectors and production vectors, and feasibility would require that the total amount of each consumed good is no greater than the initial endowment plus the amount produced.<br />
<br />
这个概念在一个经济体系中的正式表述如下:考虑一个有 <math> n</math> 个主体和 <math> k </math> 个商品的经济体系。一个分配 <math> \{x_1, ..., x_n\} </math>(其中对所有 ''i'' 有 <math> x_i \in \mathbb{R}^k </math>)是帕累托最优的,如果不存在其他可行的分配 <math> \{x_1', ..., x_n'\} </math>,使得对每个主体 <math> i </math> 的效用函数 <math> u_i </math>,对所有 <math> i \in \{1, ..., n\} </math> 有 <math> u_i(x_i') \geq u_i(x_i) </math>,且对某些 <math> i</math> 有 <math> u_i(x_i') > u_i(x_i) </math>。在这个简单的经济体系中,“可行性”是指每种商品的分配总额不超过该经济体系中该商品的总额。在一个包含生产的更为复杂的经济体中,一个分配将同时包括消费向量和生产向量,且可行性要求每种消费品的总量不大于初始禀赋加上生产量。<br />
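上述定义可以用一段简短的 Python 示意来检验(仅为说明性草稿:其中的效用组合列表与函数名均为假设性示例,并非原文内容):

```python
# 一个最小示意:在有限的可行效用组合集合上检验帕累托最优。

def pareto_dominates(u_new, u_old):
    """u_new 帕累托支配 u_old:所有主体不变差,且至少一个主体严格变好。"""
    return all(a >= b for a, b in zip(u_new, u_old)) and \
           any(a > b for a, b in zip(u_new, u_old))

def is_pareto_optimal(u, feasible_profiles):
    """若不存在可行效用组合帕累托支配 u,则 u 对应的分配是帕累托最优的。"""
    return not any(pareto_dominates(v, u) for v in feasible_profiles if v != u)

# 两主体玩具经济:可行分配的效用组合直接枚举给出(假设数据)
profiles = [(3, 1), (2, 4), (5, 1), (3, 2)]
print(is_pareto_optimal((3, 1), profiles))  # False:(3,1) 被 (3,2) 与 (5,1) 支配
print(is_pareto_optimal((5, 1), profiles))  # True:无人支配 (5,1)
```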
<br />
<br />
<br />
In principle, a change from a generally inefficient economic allocation to an efficient one is not necessarily considered to be a Pareto improvement. Even when there are overall gains in the economy, if a single agent is disadvantaged by the reallocation, the allocation is not Pareto optimal. For instance, if a change in economic policy eliminates a monopoly and that market subsequently becomes competitive, the gain to others may be large. However, since the monopolist is disadvantaged, this is not a Pareto improvement. In theory, if the gains to the economy are larger than the loss to the monopolist, the monopolist could be compensated for its loss while still leaving a net gain for others in the economy, allowing for a Pareto improvement. Thus, in practice, to ensure that nobody is disadvantaged by a change aimed at achieving Pareto efficiency, [[compensation principle|compensation]] of one or more parties may be required. It is acknowledged, in the real world, that such compensations may have [[unintended consequences]] leading to incentive distortions over time, as agents supposedly anticipate such compensations and change their actions accordingly.<ref>See [[Ricardian equivalence]]</ref><br />
<br />
In principle, a change from a generally inefficient to an efficient one is not necessarily considered to be a Pareto improvement. Even when there are overall gains in the economy, if a single agent is disadvantaged by the reallocation, the allocation is not Pareto optimal. For instance, if a change in economic policy eliminates a monopoly and that market subsequently becomes competitive, the gain to others may be large. However, since the monopolist is disadvantaged, this is not a Pareto improvement. In theory, if the gains to the economy are larger than the loss to the monopolist, the monopolist could be compensated for its loss while still leaving a net gain for others in the economy, a Pareto improvement. Thus, in practice, to ensure that nobody is disadvantaged by a change aimed at achieving Pareto efficiency, compensation of one or more parties may be required. It is acknowledged, in the real world, that such compensations may have unintended consequences leading to incentive distortions over time, as agents supposedly anticipate such compensations and change their actions accordingly.<br />
<br />
原则上,从一个普遍低效率的经济分配到一个高效率的经济分配的转变不一定被认为是一个帕累托改进。即使经济总体是获益的,如果某个主体在再分配中处于不利地位,这个分配就不是帕累托最优的。例如,如果经济政策的某个改变消除了垄断,市场随后变得具有竞争性,那么其他主体的收益可能很大。然而,由于垄断者处于不利地位,这不是一个帕累托改进。理论上,如果经济体系的收益大于垄断者的损失,垄断者可以获得对其损失的补偿,同时仍为经济体系中的其他主体留下净收益,从而实现一个帕累托改进。因此,在实践中,为了确保没有人因旨在实现帕累托效率的改变而处于不利地位,可能需要对一个或多个当事方进行补偿。在现实世界中,人们承认这种补偿可能产生意外后果,随着时间的推移导致激励扭曲,因为主体可能预期到这种补偿并相应地改变他们的行为。<br />
<br />
<br />
--[[用户:趣木木|趣木木]]([[用户讨论:趣木木|讨论]])“帕累托改善”“帕累托改进”名词注意统一<br />
Under the idealized conditions of the [[first welfare theorem]], a system of [[free market]]s, also called a "[[competitive equilibrium]]", leads to a Pareto-efficient outcome. It was first demonstrated mathematically by economists [[Kenneth Arrow]] and [[Gérard Debreu]].<br />
<br />
Under the idealized conditions of the first welfare theorem, a system of free markets, also called a "competitive equilibrium", leads to a Pareto-efficient outcome. It was first demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu.<br />
<br />
在'''<font color="#ff8000">福利经济学第一定理(the first welfare theorem)</font>'''的理想条件下,一个'''<font color="#ff8000">自由市场(free market)</font>'''系统,也称为“'''<font color="#ff8000">竞争均衡(competitive equilibrium)</font>'''” ,对应一个帕累托有效的结果。经济学家肯尼斯·阿罗(Kenneth Arrow)和杰拉德·迪布鲁(Gérard Debreu)首先用数学方法证明了这一点。<br />
<br />
<br />
<br />
However, the result only holds under the restrictive assumptions necessary for the proof: markets exist for all possible goods, so there are no [[externality|externalities]]; all markets are in full equilibrium; markets are perfectly competitive; transaction costs are negligible; and market participants have [[perfect information]].<br />
<br />
However, the result only holds under the restrictive assumptions necessary for the proof: markets exist for all possible goods, so there are no externalities; all markets are in full equilibrium; markets are perfectly competitive; transaction costs are negligible; and market participants have perfect information.<br />
<br />
然而,这个结果只有在证明所需的限制性假设下才成立,即所有可能的商品都存在市场,因此不存在外部效应; 所有市场都处于完全均衡状态; 市场是完全竞争的; 交易成本是可忽略的; 市场参与者拥有'''<font color="#ff8000">完全信息(perfect information)</font>'''。<br />
--[[用户:趣木木|趣木木]]([[用户讨论:趣木木|讨论]])在博弈论中有“完全的信息”为完全信息<br />
<br />
<br />
In the absence of perfect information or complete markets, outcomes will generally be Pareto inefficient, per the [[Joseph Stiglitz#Information asymmetry|Greenwald-Stiglitz theorem]].<ref>{{Cite journal |doi=10.2307/1891114 |last1=Greenwald |first1=B. |last2=Stiglitz |first2=J. E. |author1-link=Bruce Greenwald |author2-link=Joseph E. Stiglitz |journal=Quarterly Journal of Economics |volume=101 |issue=2 |pages=229–64 |year=1986 |title=Externalities in economies with imperfect information and incomplete markets |jstor=1891114}}</ref><br />
<br />
In the absence of perfect information or complete markets, outcomes will generally be Pareto inefficient, per the Greenwald-Stiglitz theorem.<br />
<br />
根据'''<font color="#ff8000">格林沃德-斯蒂格利茨定理(the Greenwald-Stiglitz theorem)</font>''',在缺乏完全信息或完全市场的情况下,结果通常是帕累托低效的。<br />
<br />
<br />
<br />
The [[second welfare theorem]] is essentially the reverse of the first welfare-theorem. It states that under similar, ideal assumptions, any Pareto optimum can be obtained by some [[competitive equilibrium]], or [[free market]] system, although it may also require a [[lump-sum]] transfer of wealth.<ref name="AndreuMas95"/><br />
<br />
The second welfare theorem is essentially the reverse of the first welfare-theorem. It states that under similar, ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free market system, although it may also require a lump-sum transfer of wealth.<br />
<br />
'''<font color="#ff8000">福利经济学第二定理(The second welfare theorem)</font>'''实质上是福利经济学第一定理的逆命题。它指出,在类似的理想假设下,任何帕累托最优都可以通过某种竞争均衡或自由市场制度获得,尽管这可能还需要一次性的财富转移。<br />
<br />
<br />
<br />
== Weak Pareto efficiency{{anchor|weak}} 弱帕累托效率 ==<br />
<br />
<br />
'''Weak Pareto optimality''' is a situation that cannot be strictly improved for ''every'' individual.<ref>{{Cite book | doi=10.1007/978-1-4020-9160-5_341|chapter = Pareto Optimality|title = Encyclopedia of Global Justice| pages=808–809|year = 2011|last1 = Mock|first1 = William B T.| isbn=978-1-4020-9159-9}}</ref> <br />
<br />
Weak Pareto optimality is a situation that cannot be strictly improved for every individual. <br />
<br />
弱帕累托最优是一种无法使每个个体都得到严格改善的状况。<br />
<br />
<br />
<br />
Formally, we define a '''strong pareto improvement''' as a situation in which all agents are strictly better-off (in contrast to just "Pareto improvement", which requires that one agent is strictly better-off and the other agents are at least as good). A situation is '''weak Pareto-optimal''' if it has no strong Pareto-improvements.<br />
<br />
Formally, we define a strong pareto improvement as a situation in which all agents are strictly better-off (in contrast to just "Pareto improvement", which requires that one agent is strictly better-off and the other agents are at least as good). A situation is weak Pareto-optimal if it has no strong Pareto-improvements.<br />
<br />
在形式上,我们将强帕累托改进定义为所有主体都严格变好的情况(与之相对,普通的“帕累托改进”只要求一个主体严格变好,而其他主体至少不变差)。没有强帕累托改进的情况是弱帕累托最优的。<br />
<br />
<br />
<br />
Any strong Pareto-improvement is also a weak Pareto-improvement. The opposite is not true; for example, consider a resource allocation problem with two resources, which Alice values at 10, 0 and George values at 5, 5. Consider the allocation giving all resources to Alice, where the utility profile is (10,0).<br />
<br />
Any strong Pareto-improvement is also a weak Pareto-improvement. The opposite is not true; for example, consider a resource allocation problem with two resources, which Alice values at 10, 0 and George values at 5, 5. Consider the allocation giving all resources to Alice, where the utility profile is (10,0).<br />
<br />
任何强帕累托改进也是弱帕累托改进,反之则不然。例如,考虑一个包含两件资源的资源分配问题,Alice 对它们的估值为 10、0,George 的估值为 5、5。考虑将所有资源分配给 Alice 的分配,其效用组合为(10,0)。<br />
<br />
<br />
<br />
* It is a weak-PO, since no other allocation is strictly better to both agents (there are no strong Pareto improvements). <br />
<br />
* But it is not a strong-PO, since the allocation in which George gets the second resource is strictly better for George and weakly better for Alice (it is a weak Pareto improvement) - its utility profile is (10,5)<br />
* 它是一个弱帕累托最优,因为没有其他任何分配对上述两个主体都是严格更优的(没有强帕累托改进)。<br />
* 但它不是一个强帕累托最优,因为将第二件资源分给 George 的分配对 George 是严格更优的,对 Alice 是弱更优的(它是一个弱帕累托改进),其效用组合为(10,5)。<br />
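上面的例子可以用如下 Python 示意代码核对弱/强帕累托改进的区别(仅为说明性草稿,可行效用组合的枚举为假设性示例):

```python
def strong_improvement(u_new, u_old):
    # 强帕累托改进:每个主体都严格变好
    return all(a > b for a, b in zip(u_new, u_old))

def weak_improvement(u_new, u_old):
    # 普通的帕累托改进:无人变差,且至少一人严格变好
    return all(a >= b for a, b in zip(u_new, u_old)) and \
           any(a > b for a, b in zip(u_new, u_old))

# 正文例子:Alice 估值 (10, 0),George 估值 (5, 5),枚举四种离散分配的效用组合
profiles = [(10, 0), (10, 5), (0, 5), (0, 10)]
u = (10, 0)  # 把全部资源分给 Alice
print(any(strong_improvement(v, u) for v in profiles))  # False:因此是弱帕累托最优
print(any(weak_improvement(v, u) for v in profiles))    # True:(10,5) 是帕累托改进
```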
<br />
<br />
<br />
A market doesn't require [[local nonsatiation]] to get to a weak Pareto-optimum.<ref>Markey‐Towler, Brendan and John Foster. "[http://www.uq.edu.au/economics/abstract/476.pdf Why economic theory has little to say about the causes and effects of inequality]", School of Economics, [[University of Queensland]], Australia, 21 February 2013, RePEc:qld:uq2004:476</ref><br />
<br />
A market doesn't require local nonsatiation to get to a weak Pareto-optimum.<br />
<br />
市场不需要局部不饱和就能达到弱帕累托最优。<br />
<br />
<br />
<br />
== Constrained Pareto efficiency {{anchor|Constrained Pareto efficiency}} 受约束的帕累托效率 ==<br />
<br />
'''Constrained Pareto optimality''' is a weakening of Pareto-optimality, accounting for the fact that a potential planner (e.g., the government) may not be able to improve upon a decentralized market outcome, even if that outcome is inefficient. This will occur if it is limited by the same informational or institutional constraints as are individual agents.<ref>Magill, M., & [[Martine Quinzii|Quinzii, M.]], ''Theory of Incomplete Markets'', MIT Press, 2002, [https://books.google.com/books?id=d66GXq2F2M0C&pg=PA104#v=onepage&q&f=false p. 104].</ref>{{rp|104}}<br />
<br />
Constrained Pareto optimality is a weakening of Pareto-optimality, accounting for the fact that a potential planner (e.g., the government) may not be able to improve upon a decentralized market outcome, even if that outcome is inefficient. This will occur if it is limited by the same informational or institutional constraints as are individual agents.<br />
<br />
受约束的帕累托最优是帕累托最优的弱化,因为一个潜在的规划者(比如政府)可能无法改进分散化市场的结果,即使这个结果是低效的。如果规划者受到与个体主体相同的信息或制度约束,就会出现这种情况。<br />
<br />
<br />
<br />
An example is of a setting where individuals have private information (for example, a labor market where the worker's own productivity is known to the worker but not to a potential employer, or a used-car market where the quality of a car is known to the seller but not to the buyer) which results in [[moral hazard]] or an [[adverse selection]] and a sub-optimal outcome. In such a case, a planner who wishes to improve the situation is unlikely to have access to any information that the participants in the markets do not have. Hence, the planner cannot implement allocation rules which are based on the idiosyncratic characteristics of individuals; for example, "if a person is of type A, they pay price p1, but if of type B, they pay price p2" (see [[Lindahl prices]]). Essentially, only anonymous rules are allowed (of the sort "Everyone pays price p") or rules based on observable behavior; "if any person chooses x at price px, then they get a subsidy of ten dollars, and nothing otherwise". If there exists no allowed rule that can successfully improve upon the market outcome, then that outcome is said to be "constrained Pareto-optimal".<br />
<br />
An example is of a setting where individuals have private information (for example, a labor market where the worker's own productivity is known to the worker but not to a potential employer, or a used-car market where the quality of a car is known to the seller but not to the buyer) which results in moral hazard or an adverse selection and a sub-optimal outcome. In such a case, a planner who wishes to improve the situation is unlikely to have access to any information that the participants in the markets do not have. Hence, the planner cannot implement allocation rules which are based on the idiosyncratic characteristics of individuals; for example, "if a person is of type A, they pay price p1, but if of type B, they pay price p2" (see Lindahl prices). Essentially, only anonymous rules are allowed (of the sort "Everyone pays price p") or rules based on observable behavior; "if any person chooses x at price px, then they get a subsidy of ten dollars, and nothing otherwise". If there exists no allowed rule that can successfully improve upon the market outcome, then that outcome is said to be "constrained Pareto-optimal".<br />
<br />
例如,个人拥有私人信息的情况(例如,劳动力市场中工人自己的生产率为工人所知而潜在雇主不知,或者二手车市场中汽车的质量为卖方所知而买方不知),这会导致道德风险或逆向选择以及次优结果。在这种情况下,希望改善局面的规划者不太可能获得市场参与者所没有的任何信息。因此,规划者不能执行基于个人特质的分配规则;例如,“如果一个人属于 A 型,他们支付价格 p1,但如果属于 B 型,他们支付价格 p2”(见林达尔价格)。本质上,只允许匿名规则(类似于“每个人都支付价格 p”)或基于可观察行为的规则:“如果任何人以价格 px 选择 x,那么他们将得到 10 美元的补贴,否则什么也得不到”。如果不存在能够成功改善市场结果的被允许的规则,那么该结果就被称为“受约束的帕累托最优”。<br />
<br />
<br />
<br />
The concept of constrained Pareto optimality assumes benevolence on the part of the planner and hence is distinct from the concept of [[government failure]], which occurs when the policy making politicians fail to achieve an optimal outcome simply because they are not necessarily acting in the public's best interest.<br />
<br />
The concept of constrained Pareto optimality assumes benevolence on the part of the planner and hence is distinct from the concept of government failure, which occurs when the policy making politicians fail to achieve an optimal outcome simply because they are not necessarily acting in the public's best interest.<br />
<br />
受约束的帕累托最优的概念假定了规划者的善意,因此不同于政府失灵的概念;后者指制定政策的政客未能取得最优结果,仅仅因为他们的行为不一定符合公众的最佳利益。<br />
<br />
<br />
<br />
== Fractional Pareto efficiency{{anchor|fractional}} 部分帕累托效率 ==<br />
<br />
'''Fractional Pareto optimality''' is a strengthening of Pareto-optimality in the context of [[fair item allocation]]. An allocation of indivisible items is '''fractionally Pareto-optimal (fPO)''' if it is not Pareto-dominated even by an allocation in which some items are split between agents. This is in contrast to standard Pareto-optimality, which only considers domination by feasible (discrete) allocations.<ref>Barman, S., Krishnamurthy, S. K., & Vaish, R., [https://arxiv.org/pdf/1707.04731.pdf "Finding Fair and Efficient Allocations"], ''EC '18: Proceedings of the 2018 ACM Conference on Economics and Computation'', June 2018.</ref><br />
<br />
Fractional Pareto optimality is a strengthening of Pareto-optimality in the context of fair item allocation. An allocation of indivisible items is fractionally Pareto-optimal (fPO) if it is not Pareto-dominated even by an allocation in which some items are split between agents. This is in contrast to standard Pareto-optimality, which only considers domination by feasible (discrete) allocations.<br />
<br />
'''<font color="#ff8000">部分帕累托最优(Fractional Pareto optimality)</font>'''是在物品公平分配的背景下对帕累托最优的一个加强。如果一个不可分割物品的分配,即使与允许在主体之间分割某些物品的分配相比也不受帕累托支配,那么它就是部分帕累托最优(fPO)的。这与标准的帕累托最优形成对比,后者只考虑可行的(离散的)分配的支配。<br />
<br />
<br />
<br />
As an example, consider an item allocation problem with two items, which Alice values at 3, 2 and George values at 4, 1. Consider the allocation giving the first item to Alice and the second to George, where the utility profile is (3,1).<br />
<br />
As an example, consider an item allocation problem with two items, which Alice values at 3, 2 and George values at 4, 1. Consider the allocation giving the first item to Alice and the second to George, where the utility profile is (3,1).<br />
<br />
作为一个例子,考虑一个包含两件物品的物品分配问题,Alice 对它们的估值为 3、2,George 的估值为 4、1。考虑将第一件物品分配给 Alice、第二件分配给 George 的分配,其效用组合为(3,1)。<br />
<br />
<br />
<br />
* It is Pareto-optimal, since any other discrete allocation (without splitting items) makes someone worse-off. <br />
<br />
* However, it is not fractionally-Pareto-optimal, since it is Pareto-dominated by the allocation giving to Alice 1/2 of the first item and the whole second item, and the other 1/2 of the first item to George - its utility profile is (3.5, 2).<br />
* 它是一个帕累托最优,因为其他任何离散分配(不分割物品)都会使某个主体变差。<br />
* 但是,它不是部分帕累托最优的,因为它受到如下分配的帕累托支配:将第一件物品的 1/2 和整个第二件物品分给 Alice,将第一件物品的另外 1/2 分给 George,其效用组合为(3.5,2)。<br />
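这一例子中的支配关系可以用一个小的网格搜索来核对(仅为示意草稿:份额网格的粒度为任意假设,估值取自正文例子):

```python
import itertools

# Alice 与 George 对两件物品的估值(来自正文例子)
v_alice, v_george = (3, 2), (4, 1)

def profile(s1, s2):
    """s1, s2 为 Alice 分得的两件物品的份额(0 到 1 之间);返回 (Alice, George) 效用。"""
    ua = s1 * v_alice[0] + s2 * v_alice[1]
    ug = (1 - s1) * v_george[0] + (1 - s2) * v_george[1]
    return ua, ug

u = profile(1, 0)  # 离散分配:物品 1 给 Alice,物品 2 给 George → (3, 1)

# 允许分割后,s1=1/2、s2=1 给出 (3.5, 2),它帕累托支配 (3, 1)
grid = [i / 10 for i in range(11)]
dominated = any(
    pa >= u[0] and pg >= u[1] and (pa > u[0] or pg > u[1])
    for s1, s2 in itertools.product(grid, grid)
    for pa, pg in [profile(s1, s2)]
)
print(dominated)  # True:因此该离散分配不是部分帕累托最优(fPO)
```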
<br />
<br />
<br />
== Pareto-efficiency and welfare-maximization 帕累托效率和福利最大化==<br />
<br />
{{See also|Pareto-efficient envy-free division}} 同见:帕累托有效的无嫉妒分割<br />
<br />
Suppose each agent ''i'' is assigned a positive weight ''a<sub>i</sub>''. For every allocation ''x'', define the ''welfare'' of ''x'' as the weighted sum of utilities of all agents in ''x'', i.e.:<br />
<br />
Suppose each agent i is assigned a positive weight a<sub>i</sub>. For every allocation x, define the welfare of x as the weighted sum of utilities of all agents in x, i.e.:<br />
<br />
假设每个主体 ''i'' 被赋予一个正权重 ''a<sub>i</sub>''。对于每个分配 ''x'',将 ''x'' 的福利定义为 ''x'' 中所有主体效用的加权和,即:<br />
<br />
<br />
<br />
<math>W_a(x) := \sum_{i=1}^n a_i u_i(x)</math>.<br />
<br />
<br />
<br />
<br />
Let ''x<sub>a</sub>'' be an allocation that maximizes the welfare over all allocations, i.e.:<br />
<br />
假设 ''x<sub>a</sub>'' 是一个在所有分配中使福利最大化的分配,即:<br />
<br />
<br />
<br />
<math>x_a \in \arg \max_{x} W_a(x)</math>.<br />
<br />
<br />
<br />
<br />
It is easy to show that the allocation ''x<sub>a</sub>'' is Pareto-efficient: since all weights are positive, any Pareto-improvement would increase the sum, contradicting the definition of ''x<sub>a</sub>''.<br />
<br />
<br />
很容易证明分配 ''x<sub>a</sub>'' 是帕累托有效的:因为所有权重 ''a<sub>i</sub>'' 都是正的,任何帕累托改进都会增加加权和,这与 ''x<sub>a</sub>'' 的定义相矛盾。<br />
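这一论证可以在一个离散的玩具例子上示意验证(仅为说明性草稿:效用组合与权重均为假设性示例):

```python
# 最大化加权福利 W_a(x) = Σ a_i·u_i(x) 的分配必为帕累托有效
profiles = [(3, 1), (2, 4), (5, 1), (3, 2)]  # 可行分配的效用组合(假设数据)
a = (0.5, 0.5)                               # 正权重

def welfare(u):
    return sum(w * ui for w, ui in zip(a, u))

x_a = max(profiles, key=welfare)             # 福利最大化的分配

def dominates(v, u):
    # v 帕累托支配 u:无人变差,且至少一人严格变好
    return all(p >= q for p, q in zip(v, u)) and any(p > q for p, q in zip(v, u))

print(x_a)                                       # (2, 4),加权福利为 3.0
print(any(dominates(v, x_a) for v in profiles))  # False → x_a 帕累托有效
```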
<br />
<br />
<br />
Japanese neo-[[Léon_Walras#General_equilibrium_theory|Walrasian]] economist [[Takashi Negishi]] proved<ref>{{cite journal |last=Negishi |first=Takashi |date=1960 |title=Welfare Economics and Existence of an Equilibrium for a Competitive Economy |journal=Metroeconomica |volume=12 |issue=2–3 |pages=92–97 |doi=10.1111/j.1467-999X.1960.tb00275.x }}</ref> that, under certain assumptions, the opposite is also true: for ''every'' Pareto-efficient allocation ''x'', there exists a positive vector ''a'' such that ''x'' maximizes ''W''<sub>a</sub>. A shorter proof is provided by [[Hal Varian]].<ref>{{cite journal |doi=10.1016/0047-2727(76)90018-9 |title=Two problems in the theory of fairness |journal=Journal of Public Economics |volume=5 |issue=3–4 |pages=249–260 |year=1976 |last1=Varian |first1=Hal R. |hdl=1721.1/64180 |hdl-access=free }}</ref><br />
<br />
Japanese neo-Walrasian economist Takashi Negishi proved that, under certain assumptions, the opposite is also true: for every Pareto-efficient allocation x, there exists a positive vector a such that x maximizes W<sub>a</sub>. A shorter proof is provided by Hal Varian.<br />
<br />
日本新瓦尔拉斯学派经济学家根岸隆(Takashi Negishi)证明,在某些假设下,反之亦然:对于每一个帕累托有效分配 ''x'',都存在一个正向量 ''a'',使得 ''x'' 最大化 ''W''<sub>a</sub>。哈尔·瓦里安(Hal Varian)提供了一个更简短的证明。<br />
<br />
<br />
<br />
== Use in engineering 工程学上的应用==<br />
<br />
The notion of Pareto efficiency has been used in engineering.<ref>Goodarzi, E., Ziaei, M., & Hosseinipour, E. Z., ''Introduction to Optimization Analysis in Hydrosystem Engineering'' ([[Berlin]]/[[Heidelberg]]: [[Springer Science+Business Media|Springer]], 2014), [https://books.google.com/books?id=WjS8BAAAQBAJ&pg=PT111 pp. 111–148].</ref>{{rp|111–148}} Given a set of choices and a way of valuing them, the '''Pareto frontier''' or '''Pareto set''' or '''Pareto front''' is the set of choices that are Pareto efficient. By restricting attention to the set of choices that are Pareto-efficient, a designer can make [[Trade-off|tradeoffs]] within this set, rather than considering the full range of every parameter.<ref>Jahan, A., Edwards, K. L., & Bahraminasab, M., ''Multi-criteria Decision Analysis'', 2nd ed. ([[Amsterdam]]: [[Elsevier]], 2013), [https://books.google.com/books?id=3mreBgAAQBAJ&pg=PA63 pp. 63–65].</ref>{{rp|63–65}}<br />
<br />
The notion of Pareto efficiency has been used in engineering. Given a set of choices and a way of valuing them, the Pareto frontier or Pareto set or Pareto front is the set of choices that are Pareto efficient. By restricting attention to the set of choices that are Pareto-efficient, a designer can make tradeoffs within this set, rather than considering the full range of every parameter.<br />
<br />
帕累托效率的概念已经在工程学中得到应用。给定一组选择和一种评估它们的方法,帕累托边界(或称帕累托集、帕累托前沿)就是其中帕累托有效的选择所构成的集合。通过将注意力限制在帕累托有效的选择集上,设计者可以在这个集合内进行权衡,而不必考虑每个参数的全部取值范围。<br />
<br />
<br />
<br />
[[File:Front pareto.svg|thumb|300px|Example of a Pareto frontier. The boxed points represent feasible choices, and smaller values are preferred to larger ones. Point ''C'' is not on the Pareto frontier because it is dominated by both point ''A'' and point ''B''. Points ''A'' and ''B'' are not strictly dominated by any other, and hence lie on the frontier.]] <br />
<br />
图1:Example of a Pareto frontier. The boxed points represent feasible choices, and smaller values are preferred to larger ones. Point C is not on the Pareto frontier because it is dominated by both point A and point B. Points A and B are not strictly dominated by any other, and hence lie on the frontier. <br />
<br />
帕累托边界的一个例子。方框中的点表示可行的选择,取值越小越好。点 ''C'' 不在帕累托边界上,因为它同时被点 ''A'' 和点 ''B'' 支配。点 ''A'' 和点 ''B'' 不被任何其他点严格支配,因此位于边界上。<br />
--[[用户:趣木木|趣木木]]([[用户讨论:趣木木|讨论]])图片的格式按照[图1:英文+译文来]<br />
<br />
[[File:Pareto Efficient Frontier 1024x1024.png|thumb|256px|A [[production-possibility frontier]]. The red line is an example of a Pareto-efficient frontier, where the frontier and the area left and below it are a continuous set of choices. The red points on the frontier are examples of Pareto-optimal choices of production. Points off the frontier, such as N and K, are not Pareto-efficient, since there exist points on the frontier which Pareto-dominate them.]]<br />
<br />
图2:A production-possibility frontier. The red line is an example of a Pareto-efficient frontier, where the frontier and the area left and below it are a continuous set of choices. The red points on the frontier are examples of Pareto-optimal choices of production. Points off the frontier, such as N and K, are not Pareto-efficient, since there exist points on the frontier which Pareto-dominate them.<br />
<br />
一个'''<font color="#ff8000">生产可能性边界(production-possibility frontier)</font>'''。红线是帕累托有效边界的一个例子,边界及其左下方的区域构成一个连续的选择集。边界上的红点是生产的帕累托最优选择的例子。边界外的点,如 ''N'' 和 ''K'',不是帕累托有效的,因为边界上存在帕累托支配它们的点。<br />
<br />
<br />
<br />
=== Pareto frontier 帕累托边界 ===<br />
<br />
For a given system, the '''Pareto frontier''' or '''Pareto set''' is the set of parameterizations (allocations) that are all Pareto efficient. Finding Pareto frontiers is particularly useful in engineering. By yielding all of the potentially optimal solutions, a designer can make focused [[Trade-off|tradeoffs]] within this constrained set of parameters, rather than needing to consider the full ranges of parameters.<ref>Costa, N. R., & Lourenço, J. A., "Exploring Pareto Frontiers in the Response Surface Methodology", in G.-C. Yang, S.-I. Ao, & L. Gelman, eds., ''Transactions on Engineering Technologies: World Congress on Engineering 2014'' (Berlin/Heidelberg: Springer, 2015), [https://books.google.com/books?id=eMElCQAAQBAJ&pg=PA398 pp. 399–412].</ref>{{rp|399–412}}<br />
<br />
For a given system, the Pareto frontier or Pareto set is the set of parameterizations (allocations) that are all Pareto efficient. Finding Pareto frontiers is particularly useful in engineering. By yielding all of the potentially optimal solutions, a designer can make focused tradeoffs within this constrained set of parameters, rather than needing to consider the full ranges of parameters.<br />
<br />
对于一个给定的系统,'''<font color="#ff8000">帕累托边界(the Pareto frontier)</font>'''或'''<font color="#ff8000">帕累托集(the Pareto set)</font>'''是所有帕累托有效的参数化(分配)的集合。寻找帕累托边界在工程学中特别有用。通过给出所有潜在的最优解,设计者可以在这个受限的参数集合内进行有针对性的权衡,而不需要考虑每个参数的全部取值范围。<br />
<br />
<br />
<br />
The Pareto frontier, ''P''(''Y''), may be more formally described as follows. Consider a system with function <math>f: \mathbb{R}^n \rightarrow \mathbb{R}^m</math>, where ''X'' is a [[compact space|compact set]] of feasible decisions in the [[metric space]] <math>\mathbb{R}^n</math>, and ''Y'' is the feasible set of criterion vectors in <math>\mathbb{R}^m</math>, such that <math>Y = \{ y \in \mathbb{R}^m:\; y = f(x), x \in X\;\}</math>.<br />
<br />
<br />
帕累托边界 ''P''(''Y'') 可以更正式地描述如下。考虑一个带有函数 <math>f: \mathbb{R}^n \rightarrow \mathbb{R}^m</math> 的系统,其中 ''X'' 是'''<font color="#ff8000">度量空间(metric space)</font>''' <math>\mathbb{R}^n</math> 中可行决策的'''<font color="#ff8000">紧集(compact set)</font>''',''Y'' 是 <math>\mathbb{R}^m</math> 中标准向量的可行集,使得 <math>Y = \{ y \in \mathbb{R}^m:\; y = f(x), x \in X\;\}</math>。<br />
<br />
<br />
<br />
We assume that the preferred directions of criteria values are known. A point <math>y^{\prime\prime} \in \mathbb{R}^m</math> is preferred to (strictly dominates) another point <math>y^{\prime} \in \mathbb{R}^m</math>, written as <math>y^{\prime\prime} \succ y^{\prime}</math>. The Pareto frontier is thus written as:<br />
<br />
<br />
我们假设各标准值的偏好方向是已知的。一个点 <math>y^{\prime\prime} \in \mathbb{R}^m</math> 优于(严格支配)另一个点 <math>y^{\prime} \in \mathbb{R}^m</math>,记作 <math>y^{\prime\prime} \succ y^{\prime}</math>。因此,帕累托边界可以写作:<br />
<br />
<br />
<br />
: <math>P(Y) = \{ y^\prime \in Y: \; \{y^{\prime\prime} \in Y:\; y^{\prime\prime} \succ y^{\prime}, y^\prime \neq y^{\prime\prime} \; \} = \empty \}. </math><br />
<br />
<br />
<br />
<br />
<br />
=== Marginal rate of substitution 边际替代率 ===<br />
<br />
A significant aspect of the Pareto frontier in economics is that, at a Pareto-efficient allocation, the [[marginal rate of substitution]] is the same for all consumers. A formal statement can be derived by considering a system with ''m'' consumers and ''n'' goods, and a utility function of each consumer as <math>z_i=f^i(x^i)</math> where <math>x^i=(x_1^i, x_2^i, \ldots, x_n^i)</math> is the vector of goods, both for all ''i''. The feasibility constraint is <math>\sum_{i=1}^m x_j^i = b_j</math> for <math>j=1,\ldots,n</math>. To find the Pareto optimal allocation, we maximize the [[Lagrangian mechanics|Lagrangian]]:<br />
<br />
<br />
经济学中,帕累托边界的一个重要方面是:在帕累托有效分配中,所有消费者的'''<font color="#ff8000">边际替代率(the marginal rate of substitution)</font>'''是相同的。一个正式的陈述可以通过考虑一个有 ''m'' 个消费者和 ''n'' 种商品的系统来推导,其中每个消费者的效用函数为 <math>z_i=f^i(x^i)</math>,对所有 ''i'',<math>x^i=(x_1^i, x_2^i, \ldots, x_n^i)</math> 是商品向量。可行性约束为 <math>\sum_{i=1}^m x_j^i = b_j</math>,其中 <math>j=1,\ldots,n</math>。为了找到帕累托最优分配,我们最大化'''<font color="#ff8000">拉格朗日函数(Lagrangian)</font>''':<br />
<br />
<br />
<br />
: <math>L_i((x_j^k)_{k,j}, (\lambda_k)_k, (\mu_j)_j)=f^i(x^i)+\sum_{k=2}^m \lambda_k(z_k- f^k(x^k))+\sum_{j=1}^n \mu_j \left( b_j-\sum_{k=1}^m x_j^k \right)</math><br />
<br />
<br />
<br />
<br />
where <math>(\lambda_k)_k</math> and <math>(\mu_j)_j</math> are the vectors of multipliers. Taking the partial derivative of the Lagrangian with respect to each good <math>x_j^k</math> for <math>j=1,\ldots,n</math> and <math>k=1,\ldots, m</math> gives the following system of first-order conditions:<br />
<br />
<br />
其中 <math>(\lambda_k)_k</math> 和 <math>(\mu_j)_j</math> 是乘子向量。对拉格朗日函数关于每个商品 <math>x_j^k</math>(<math>j=1,\ldots,n</math>,<math>k=1,\ldots, m</math>)求偏导数,得到以下一阶条件方程组:<br />
<br />
<br />
<br />
: <math>\frac{\partial L_i}{\partial x_j^i} = f_{x^i_j}^1-\mu_j=0\text{ for }j=1,\ldots,n,</math><br />
<br />
<br />
<br />
<br />
<br />
: <math>\frac{\partial L_i}{\partial x_j^k} = -\lambda_k f_{x^k_j}^i-\mu_j=0 \text{ for }k= 2,\ldots,m \text{ and }j=1,\ldots,n,</math><br />
<br />
<br />
<br />
<br />
<br />
where <math>f_{x^i_j}</math> denotes the partial derivative of <math>f</math> with respect to <math>x_j^i</math>. Now, fix any <math>k\neq i</math> and <math>j,s\in \{1,\ldots,n\}</math>. The above first-order conditions imply that<br />
<br />
<br />
其中 <math>f_{x^i_j}</math> 表示 <math>f</math> 关于 <math>x_j^i</math> 的偏导数。现固定任意 <math>k\neq i</math> 和 <math>j,s\in \{1,\ldots,n\}</math>。上述一阶条件意味着<br />
<br />
<br />
<br />
: <math>\frac{f_{x_j^i}^i}{f_{x_s^i}^i}=\frac{\mu_j}{\mu_s}=\frac{f_{x_j^k}^k}{f_{x_s^k}^k}.</math><br />
<br />
<br />
<br />
<br />
<br />
Thus, in a Pareto-optimal allocation, the marginal rate of substitution must be the same for all consumers.<ref>Wilkerson, T., ''Advanced Economic Theory'' ([[Waltham Abbey]]: Edtech Press, 2018), [https://books.google.com/books?id=UtW_DwAAQBAJ&pg=PA114 p. 114].</ref>{{rp|114}}<br />
<br />
<br />
因此,在帕累托最优分配中,所有消费者的边际替代率必须相同。<br />
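下面给出一个极简的数值示意(效用函数形式、参数与分配比例均为假设的示例,并非上文推导的一部分):在两个消费者、两种商品、相同科布-道格拉斯效用的情形下,按同一比例分配两种商品会使两人的边际替代率相等,与上述一阶条件一致。<br />

```python
# 两个消费者、两种商品,效用函数 u(x, y) = x**a * y**(1 - a)(示例假设)
a = 0.5            # 科布-道格拉斯指数
X, Y = 10.0, 10.0  # 两种商品的总禀赋

def mrs(x, y):
    """边际替代率 MRS = (∂u/∂x) / (∂u/∂y) = a*y / ((1-a)*x)"""
    return (a * y) / ((1 - a) * x)

# 把两种商品都按 60/40 的同一比例分给两个消费者
consumer_1 = (0.6 * X, 0.6 * Y)
consumer_2 = (0.4 * X, 0.4 * Y)

print(mrs(*consumer_1), mrs(*consumer_2))  # 两者相等
```

若两人的 MRS 不相等,则存在使双方都变得更好的交换,因此该分配不是帕累托最优的。<br />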
<br />
<br />
<br />
=== Computation 计算===<br />
<br />
[[Algorithm]]s for computing the Pareto frontier of a finite set of alternatives have been studied in [[computer science]] and power engineering.<ref>{{cite journal |doi=10.3390/en6031439 |last1=Tomoiagă |first1=Bogdan |last2=Chindriş |first2=Mircea |last3=Sumper |first3=Andreas |last4=Sudria-Andreu |first4=Antoni |last5=Villafafila-Robles |first5=Roberto |title=Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II |journal=Energies |year=2013 |volume=6 |issue=3 |pages=1439–55 |doi-access=free }}</ref> They include:<br />
<br />
<br />
计算机科学和电力工程领域已经研究了计算有限备选方案集的帕累托边界的算法。它们包括:<br />
<br />
<br />
<br />
* "The maximum vector problem" or the [[Skyline operator|skyline query]].<ref>{{cite journal |doi=10.1016/0020-0190(96)00116-0 |last1=Nielsen |first1=Frank |title=Output-sensitive peeling of convex and maximal layers |journal=Information Processing Letters |volume=59 |pages=255–9 |year=1996 |issue=5 |citeseerx=10.1.1.259.1042 }}</ref><ref>{{cite journal |doi=10.1145/321906.321910 |last1=Kung |first1=H. T. |last2=Luccio |first2=F. |last3=Preparata |first3=F.P. |title=On finding the maxima of a set of vectors |journal=Journal of the ACM |volume=22 |pages=469–76 |year=1975 |issue=4 }}</ref><ref>{{cite journal |doi=10.1007/s00778-006-0029-7 |last1=Godfrey |first1=P. |last2=Shipley |first2=R. |last3=Gryz |first3=J. |journal=VLDB Journal |volume=16 |pages=5–28 |year=2006 |title=Algorithms and Analyses for Maximal Vector Computation |citeseerx=10.1.1.73.6344 }}</ref><br />
* “最大向量问题”,或称轮廓查询。<br />
<br />
* "The scalarization algorithm" or the method of weighted sums.<ref name="Kimde Weck2005">{{cite journal|last1=Kim|first1=I. Y.|last2=de Weck|first2=O. L.|title=Adaptive weighted sum method for multiobjective optimization: a new method for Pareto front generation|journal=Structural and Multidisciplinary Optimization|volume=31|issue=2|year=2005|pages=105–116|issn=1615-147X|doi=10.1007/s00158-005-0557-6}}</ref><ref name="MarlerArora2009">{{cite journal|last1=Marler|first1=R. Timothy|last2=Arora|first2=Jasbir S.|title=The weighted sum method for multi-objective optimization: new insights|journal=Structural and Multidisciplinary Optimization|volume=41|issue=6|year=2009|pages=853–862|issn=1615-147X|doi=10.1007/s00158-009-0460-7}}</ref><br />
* “标量化算法”,或称加权求和法。<br />
<br />
<br />
<br />
* "The <math>\epsilon</math>-constraints method".<ref>{{cite journal|title=On a Bicriterion Formulation of the Problems of Integrated System Identification and System Optimization|journal=IEEE Transactions on Systems, Man, and Cybernetics|volume=SMC-1|issue=3|year=1971|pages=296–297|issn=0018-9472|doi=10.1109/TSMC.1971.4308298}}</ref><ref name="Mavrotas2009">{{cite journal|last1=Mavrotas|first1=George|title=Effective implementation of the ε-constraint method in Multi-Objective Mathematical Programming problems|journal=Applied Mathematics and Computation|volume=213|issue=2|year=2009|pages=455–465|issn=00963003|doi=10.1016/j.amc.2009.03.037}}</ref><br />
* “ϵ-约束法”。<br />
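作为上述算法的一个极简示意(暴力 O(''n''²) 枚举,仅用于演示"最大向量问题";函数名为示例假设),下面的 Python 代码计算一个有限点集的帕累托边界,假设每个分量都越大越好:<br />

```python
def pareto_frontier(points):
    """返回 points 中所有未被严格支配的点(每个分量越大越好)。"""
    frontier = []
    for p in points:
        # q 严格支配 p:q 的每个分量都不小于 p 的对应分量,且 q 与 p 不同
        dominated = any(
            q != p and all(qi >= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

print(pareto_frontier([(1, 2), (2, 1), (0, 0), (2, 2)]))  # [(2, 2)]
```

实际应用中通常使用扫描或分治算法(如 Kung–Luccio–Preparata 的最大向量算法)把复杂度降到 O(''n'' log ''n'')。<br />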
<br />
<br />
<br />
== Use in biology 在生物学中的应用==<br />
<br />
Pareto optimisation has also been studied in biological processes.<ref>Moore, J. H., Hill, D. P., Sulovari, A., & Kidd, L. C., "Genetic Analysis of Prostate Cancer Using Computational Evolution, Pareto-Optimization and Post-processing", in R. Riolo, E. Vladislavleva, M. D. Ritchie, & J. H. Moore, eds., ''Genetic Programming Theory and Practice X'' (Berlin/Heidelberg: Springer, 2013), [https://books.google.co.il/books?id=YZZAAAAAQBAJ&pg=PA86 pp. 87–102].</ref>{{rp|87–102}} In bacteria, genes were shown to be either inexpensive to make (resource efficient) or easier to read (translation efficient). Natural selection acts to push highly expressed genes towards the Pareto frontier for resource use and translational efficiency. Genes near the Pareto frontier were also shown to evolve more slowly (indicating that they are providing a selective advantage).<ref>{{Cite journal|doi=10.1186/s13059-018-1480-7|pmid=30064467|last1=Seward|first1=Emily A. |last2=Kelly|first2=Steven|title=Selection-driven cost-efficiency optimization of transcripts modulates gene evolutionary rate in bacteria.|journal=Genome Biology|volume=19|issue=1|pages=102|year=2018|pmc=6066932}}</ref><br />
<br />
<br />
帕累托最优化在生物过程中也有研究。在细菌中,基因要么生成成本低廉(资源节约型) ,要么更容易被读取(翻译效率型)。自然选择将高表达的基因推向资源利用和翻译效率的帕累托边界。帕累托边界附近基因的进化速度也较慢(这表明它们提供了一种选择优势)。<br />
<br />
<br />
<br />
== Criticism 批判 ==<br />
<br />
It would be incorrect to treat Pareto efficiency as equivalent to societal optimization,<ref>[[Jacques Drèze|Drèze, J.]], ''Essays on Economic Decisions Under Uncertainty'' ([[Cambridge]]: [[Cambridge University Press]], 1987), [https://books.google.com/books?id=LWE4AAAAIAAJ&pg=PA358 pp. 358–364]</ref>{{rp|358–364}} as the latter is a [[normative]] concept that is a matter of interpretation that typically would account for the consequence of degrees of inequality of distribution.<ref>Backhaus, J. G., ''The Elgar Companion to Law and Economics'' ([[Cheltenham|Cheltenham, UK]] / [[Northampton, MA]]: [[Edward Elgar Publishing|Edward Elgar]], 2005), [https://books.google.com/books?id=EtguKoWHUHYC&lpg=PP1&hl=de&pg=PA10 pp. 10–15].</ref>{{rp|10–15}} An example would be the interpretation of one school district with low property tax revenue versus another with much higher revenue as a sign that more equal distribution occurs with the help of government redistribution.<ref>Paulsen, M. B., "The Economics of the Public Sector: The Nature and Role of Public Policy in the Finance of Higher Education", in M. B. Paulsen, J. C. Smart, eds. ''The Finance of Higher Education: Theory, Research, Policy, and Practice'' (New York: Agathon Press, 2001), [https://books.google.com/books?id=BlkPAy-gb8sC&pg=PA95 pp. 95–132].</ref>{{rp|95–132}}<br />
<br />
<br />
把帕累托最优等同于社会优化是不正确的,因为后者是一个规范性概念,其解释因人而异,通常会考虑分配不平等程度所带来的后果。一个例子是:把一个财产税收入较低的学区与另一个收入高得多的学区相比较,并将其解释为借助政府再分配实现了更平等分配的标志。<br />
<br />
<br />
<br />
Pareto efficiency does not require a totally equitable distribution of wealth.<ref>Bhushi, K., ed., ''Farm to Fingers: The Culture and Politics of Food in Contemporary India'' (Cambridge: Cambridge University Press, 2018), [https://books.google.com/books?id=NYJIDwAAQBAJ&pg=PA222 p. 222].</ref>{{rp|222}} An economy in which a wealthy few hold the [[Wealth condensation|vast majority of resources]] can be Pareto efficient. This possibility is inherent in the definition of Pareto efficiency; often the [[status quo]] is Pareto efficient regardless of the degree to which wealth is equitably distributed. A simple example is the distribution of a pie among three people. The most equitable distribution would assign one third to each person. However the assignment of, say, a half section to each of two individuals and none to the third is also Pareto optimal despite not being equitable, because none of the recipients could be made better off without decreasing someone else's share; and there are many other such distribution examples. An example of a Pareto inefficient distribution of the pie would be allocation of a quarter of the pie to each of the three, with the remainder discarded.<ref>Wittman, D., ''Economic Foundations of Law and Organization'' (Cambridge: Cambridge University Press, 2006), [https://books.google.com/books?id=fOolQOtKM7QC&pg=PA18 p. 18].</ref>{{rp|18}} The origin (and utility value) of the pie is conceived as immaterial in these examples. In such cases, whereby a "windfall" is gained that none of the potential distributees actually produced (e.g., land, inherited wealth, a portion of the broadcast spectrum, or some other resource), the criterion of Pareto efficiency does not determine a unique optimal allocation. Wealth consolidation may exclude others from wealth accumulation because of bars to market entry, etc.<br />
<br />
<br />
帕累托最优并不要求完全公平的财富分配。一个少数富人掌握绝大多数资源的经济体可以是帕累托有效的。这种可能性是帕累托效率的定义所固有的;通常情况下,无论财富分配的公平程度如何,现状都是帕累托有效的。一个简单的例子是三个人之间分一个馅饼:最公平的分配是每人三分之一。<br />
<br />
<br />
<br />
另一种分配是两个人各得一半,第三个人分毫不得。尽管这种分配并不公平,它同样是帕累托最优的,因为任何一个受分配者都无法在不减少他人份额的情况下变得更好;类似的分配例子还有很多。帕累托无效率的分饼例子是:三人每人分得四分之一,剩下的四分之一被丢弃。在这些例子中,馅饼的来源(及其效用值)被认为无关紧要。在这类情况下,即获得一笔任何潜在受分配者都未实际生产的"意外之财"(例如土地、继承的财富、广播频谱的一部分或其他资源)时,帕累托最优标准并不能确定唯一的最优分配。由于市场准入门槛等原因,财富集中可能会把其他人排除在财富积累之外。<br />
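上面的分饼例子可以用一个极简的支配判断草图来验证(份额向量为假设的示例数据):<br />

```python
def dominates(a, b):
    """分配 a 帕累托支配分配 b:人人份额不减少,且至少一人份额严格增加。"""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

equal    = (1/3, 1/3, 1/3)  # 最公平的分配,帕累托有效
unequal  = (1/2, 1/2, 0)    # 不公平,但同样帕累托有效
wasteful = (1/4, 1/4, 1/4)  # 丢弃四分之一馅饼,帕累托无效

print(dominates(equal, wasteful))  # True:平均分配使人人都多得
print(dominates(equal, unequal))   # False:公平与否不决定支配关系
```

可见,被丢弃部分的分配被平均分配支配,因而无效;而公平分配与不公平分配之间互不支配,两者都是帕累托有效的。<br />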
<br />
<br />
<br />
The [[liberal paradox]] elaborated by [[Amartya Sen]] shows that when people have preferences about what other people do, the goal of Pareto efficiency can come into conflict with the goal of individual liberty.<ref>Sen, A., ''Rationality and Freedom'' ([[Cambridge, Massachusetts|Cambridge, MA]] / London: [[Harvard University Press|Belknep Press]], 2004), [https://books.google.cz/books?id=DaOY4DQ-MKAC&pg=PA92 pp. 92–94].</ref>{{rp|92–94}}<br />
<br />
<br />
阿马蒂亚·森(Amartya Sen)阐述的'''<font color="#ff8000">自由主义悖论(The liberal paradox)</font>'''表明,当人们对他人的行为有偏好时,帕累托最优的目标可能与个人自由的目标发生冲突。<br />
<br />
<br />
<br />
==See also 请参阅 ==<br />
<br />
* [[Admissible decision rule]], analog in [[decision theory]] 可容许决策规则,决策理论中的类比<br />
<br />
* [[Arrow's impossibility theorem]] 阿罗不可能定理<br />
<br />
* [[Bayesian efficiency]] 贝叶斯效率<br />
<br />
* [[Fundamental theorems of welfare economics]] 福利经济学基本定理<br />
<br />
* [[Deadweight loss]] 无谓损失<br />
<br />
* [[Economic efficiency]] 经济效益<br />
<br />
* [[Highest and best use]] 最佳使用<br />
<br />
* [[Kaldor–Hicks efficiency]] 卡尔多-希克斯效率<br />
<br />
* [[Market failure]], when a market result is not Pareto optimal 市场失灵,即市场结果非帕累托最优的时刻<br />
<br />
* [[Maximal element]], concept in [[order theory]] 极大元,阶理论中的概念<br />
<br />
* [[Maxima of a point set]] 点集极大值<br />
<br />
* [[Multi-objective optimization]] 多目标优化<br />
<br />
* [[Pareto-efficient envy-free division]] 帕累托有效的无嫉妒分割<br />
<br />
* ''[[Social Choice and Individual Values]]'' for the '(weak) Pareto principle' 关于弱帕累托原则的社会选择与个人价值<br />
<br />
* [[Trade-off talking rational economic person|TOTREP]] 讲究权衡的理性经济人<br />
<br />
* [[Welfare economics]] 福利经济<br />
<br />
<br />
==References 参考文献==<br />
<br />
{{reflist|30em}}<br />
<br />
<br />
<br />
== Further reading 延伸阅读 ==<br />
<br />
* {{Cite Fudenberg Tirole 1991|pages=[https://books.google.com/books?id=pFPHKwXro3QC&pg=PA18 18–23]}}<br />
<br />
* {{Cite journal |last1=Bendor | first1=Jonathan |last2= Mookherjee | first2=Dilip | title = Communitarian versus Universalistic norms | journal = [[Quarterly Journal of Political Science]] | volume = 3 | issue = 1 | pages = 33–61 | doi = 10.1561/100.00007028 | date = April 2008 | ref = harv }}<br />
<br />
* {{Cite journal | last = Kanbur | first = Ravi| author-link = Ravi Kanbur | title = Pareto's revenge | journal = Journal of Social and Economic Development | volume = 7 | issue = 1 | pages = 1–11 | date = January–June 2005 | url = http://www.arts.cornell.edu/poverty/kanbur/ParRev.pdf | ref = harv }}<br />
<br />
* {{cite book | last = Ng | first = Yew-Kwang | author-link = Yew-Kwang Ng | title = Welfare economics towards a more complete analysis | url=https://books.google.com/books?id=o-2GDAAAQBAJ&printsec=frontcover| publisher = Palgrave Macmillan | location = Basingstoke, Hampshire New York | year = 2004 | isbn = 9780333971215 }}<br />
<br />
* {{Citation | author-first1=Ariel | author-last1=Rubinstein | author-first2=Martin J. | author-last2=Osborne | author-link1 = Ariel Rubinstein | contribution = Introduction | editor-first1=Ariel | editor-last1=Rubinstein | editor-first2=Martin J. | editor-last2=Osborne | editor-link1 = Ariel Rubinstein | title = A course in game theory | pages = 6–7 | publisher = MIT Press | location = Cambridge, Massachusetts | year = 1994 | isbn = 9780262650403 }} [https://books.google.com/books?id=5ntdaYX4LPkC&pg=PA6 Book preview.]<br />
<br />
* {{Cite journal | last = Mathur | first = Vijay K. | title = How well do we know Pareto optimality? | journal = The Journal of Economic Education | volume = 22 | issue = 2 | pages = 172–178 | doi = 10.2307/1182422 | date = Spring 1991 | ref = harv | jstor = 1182422 }}<br />
<br />
* {{Cite journal | last1 = Newbery | first1 = David M.G. | last2 = Stiglitz | first2 = Joseph E. | author-link1 = David Newbery | author-link2 = Joseph Stiglitz | title = Pareto inferior trade | journal = Review of Economic Studies | volume = 51 | issue = 1 | pages = 1–12 | doi = 10.2307/2297701 | date = January 1984 | ref = harv | jstor = 2297701 }}<br />
<br />
<br />
<br />
{{Economics}}<br />
<br />
{{Game theory}}<br />
<br />
{{Voting systems}}<br />
<br />
<br />
<br />
{{Authority control}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Pareto Efficiency}}<br />
<br />
[[Category:Game theory]]<br />
<br />
<br />
[[Category:Law and economics]]<br />
<br />
<br />
[[Category:Welfare economics]]<br />
<br />
<br />
[[Category:Pareto efficiency]]<br />
<br />
<br />
[[Category:Mathematical optimization]]<br />
<br />
<br />
[[Category:Electoral system criteria]]<br />
<br />
<br />
[[Category:Vilfredo Pareto]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Pareto efficiency]]. Its edit history can be viewed at [[帕累托最优/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E8%87%AA%E5%A4%8D%E5%88%B6_Self-replication&diff=19050自复制 Self-replication2020-11-22T15:12:02Z<p>粲兰:</p>
<hr />
<div>{{#seo:<br />
|keywords=自复制,生物细胞,计算机<br />
|description=一个动力系统任何能产生与自身相同或相似的复制体的的行为<br />
}}<br />
[[Image:DNA_chemical_structure.svg|thumb|right|200px|DNA分子结构 ]]<br />
'''自复制 Self-replication'''是一个动力系统产生与自身相同或相似的复制体的任何行为。生物细胞在适当的环境下通过细胞分裂进行繁殖。在细胞分裂过程中,DNA 被复制,并在生殖过程中传递给后代。生物病毒也可以复制,但只能通过感染过程劫持细胞的生殖机制来实现。有害的朊病毒蛋白可以通过把正常的蛋白质转化为反常形式来复制。<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref>计算机病毒利用计算机上已有的硬件和软件进行复制。自我复制机器人一直是一个研究领域,也是科幻小说中令人感兴趣的主题。任何不能完美复制的自复制机制都会发生变异,产生自身的变异体。这些变异体将受到自然选择的作用,因为某些变异体能比其他变异体更好地在当前环境中生存,从而胜过它们。<br />
<br />
<br />
==综述==<br />
===理论===<br />
<br />
[[约翰·冯·诺依曼 John von Neumann]]的早期研究<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref>表明复制因子有几个部分:<br />
<br />
*'''<font color="#ff8000">复制机 replicator</font>'''的编码表示<br />
*一种能复制编码后的复制机表示的机制<br />
*一种能在复制机所在环境中启动构建过程的机制<br />
<br />
<br />
这种模式可能有例外,尽管目前尚未发现任何例外。例如,科学家们已经接近于在 RNA 单体和转录酶的"环境"中构建[https://arstechnica.com/science/2011/04/investigations-into-the-ancient-RNA-world/ 可复制的RNA]。在这种情况下,"身体"就是基因组,而专门的复制机制是外部的。对外部复制机制的需求尚未被克服,这种系统更准确的描述是"辅助复制"而不是"自我复制"。<br />
<br />
<br />
然而,最简单的可能情况是只有一个基因组存在。如果没有一些自我繁殖步骤的说明,一个只有基因组的系统可能被描述为类似于晶体的东西会更为恰当。<br />
<br />
<br><br />
<br />
===自复制的种类===<br />
<br />
最近的研究<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref>已经开始对复制者进行分类,通常基于它们所需要的支持程度。<br />
<br />
<br />
*'''<font color="#ff8000">天然复制机 Natural replicators</font>'''的设计全部或绝大部分不经人手,浑然天成。这样的系统包含自然的生命形式。<br />
*'''<font color="#ff8000">自养复制机 Autotrophic replicators</font>'''可以在自然环境下进行自我复制,自行收集自身所需的材料。据推测,人类可以设计出非生物的自养复制机,并使其能够按照人类产品的规格进行生产。<br />
*'''<font color="#ff8000">自生产系统 Self-reproductive systems</font>'''存在于假想当中,可以利用工业原料,例如金属棒和金属丝,以产生自身的拷贝。<br />
*'''<font color="#ff8000">自组装系统 Self-assembling systems</font>'''自动将它们各种已完成的部分组装起来。这种系统的简单例子已经在宏观尺度得到展示。<br />
<br />
<br />
机械复制机的设计空间非常广阔。迄今为止,罗伯特·弗雷塔斯 Robert Freitas和拉尔夫·默克尔 Ralph Merkle的综合研究<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> 已经确定了137个设计维度并将其分为十几个独立的类别,包括:<br />
:(1)复制控制 Replication Control,<br />
:(2)复制信息 Replication Information,<br />
:(3)'''<font color="#ff8000">复制基质(Replication Substrate)</font>''',<br />
:(4)复制机结构 Replicator Structure,<br />
:(5)被动部件 Passive Parts,<br />
:(6)主动子单元 Active Subunits,<br />
:(7)'''<font color="#ff8000">复制机能量学(Replicator Energetics)</font>''',<br />
:(8)'''<font color="#ff8000">复制机运动学(Replicator Kinematics)</font>''',<br />
:(9)复制过程 Replication Process,<br />
:(10)复制机性能 Replicator Performance,<br />
:(11)产物结构 Product Structure,<br />
:(12)可演化性 Evolvability。<br />
<br />
<br />
===一种自复制的计算机程序——Quine===<br />
<br />
在[[计算机科学]]中,quine是一种自我复制的计算机程序,当执行时,输出自己的代码。例如,利用Python语言编写的一个 quine 如下:<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
<br />
<br />
一种更简单的方法是编写一个程序,这个程序将复制它所指向的任何数据流,然后把程序指向自己。在这种情况下,程序既被当作可执行代码,也被当作要操作的数据。这种方法在包括生物生命在内的大多数自复制系统中都很常见,而且更简单,因为它不需要程序包含对自身的完整描述。<br />
<br />
<br />
在许多编程语言中,空程序是合法的,并且执行时不会产生错误或其他输出。因此,其输出是相同的源代码,所以这种程序是一种简单的自复制机。<br />
<br />
<br />
===自复制式平铺===<br />
<br />
在几何学中,'''<font color="#ff8000">自复制式平铺 self-replicating tiling</font>'''是一种平铺方法,其中几个全等的图形可以连接在一起,形成一个较大的类似于原来的图形。这属于一个被称为'''密铺 tessellation'''的研究领域。 称为“斯芬克斯 sphinx”的六块正三边形组 hexiamond是唯一已知的自我复制的五边形<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref> 。例如,4个图中的凹五边形可以一起组成一个和原形状相似但是原来2倍大小的凹五边形。所罗门·格伦布 Solomon W. Golomb <ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref>为这样的自我复制纹样创造了'''rep-tiles'''这个术语。<br />
<br />
<br />
2012年,李·萨洛斯 Lee Sallows 将 rep-tiles 认定为一类特殊的自平铺纹样集(setiset)。一个 ''n'' 阶的自平铺纹样集由 ''n'' 个形状组成,它们可以用 ''n'' 种不同的方式组合,分别拼出这 ''n'' 个形状的放大版本。每个形状各不相同的自平铺纹样集被称为"完美的 perfect"。''n'' 次重复的 rep-tile 只是由 ''n'' 个相同部分组成的自平铺纹样集。<br />
{|<br />
|- style="vertical-align:bottom;"<br />
[[File:Self-replication_of_sphynx_hexidiamonds.svg|thumb|left|text-bottom|260px|可以将四个“sphinx”拼在一起以形成另一个sphinx。]]<br />
[[File:A rep-tile-based_setiset_of_order_4.png|thumb|right|text-bottom|290px|一个完美的setiset 4阶]]<br />
|}<br />
{{clear}}<br />
<br />
===自复制的粘土晶体===<br />
<br />
粘土晶体中存在一种不基于 DNA 或 RNA 的天然自复制。<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref>粘土由大量的小晶体组成,粘土是促进晶体生长的环境。晶体由规则的原子晶格组成的,将其放置在含有晶体成分的水溶液中能够生长,并自动地将晶体边界上的原子排列成晶体形式。当正常的原子结构被破坏时,晶体可能具有不规则性,当晶体生长时,这些不规则性可能会传播,形成一种不规则晶体的自我复制。由于这些不规则结构可能会影响晶体分裂形成新晶体的概率,因此这种不规则结构的晶体甚至可以被认为是在经历演化过程。<br />
<br />
<br />
===应用===<br />
<br />
一些工程科学的长期目标是制造出一种可以自复制的'''<font color="#ff8000">铿锵复制机 clanking replicator</font>'''。通常的动机是为了在保证产品功效的同时降低每件产品的成本。许多权威人士表示,自复制产品的成本最终应能逼近木材或其他生物材质的单位重量成本,因为自我复制不需要传统工业产品所需的劳动力、资本和分销成本。<br />
<br />
<br />
制造出一个全新的人工复制机是一个合理的近期目标。<br />
<br />
<br />
美国宇航局最近的一项研究表明,铿锵复制机的复杂度大约相当于英特尔奔腾4处理器的复杂度。<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> 也就是说,这项技术在一个合理的商业时间规模内,是可以由一个相对较小的工程团队以一个合理的成本实现的。<br />
<br />
<br />
目前学术界对生物技术的有着浓厚兴趣,这一领域的也有大量资金,这正是尝试利用现有细胞的复制能力的时候,而且可以期望产生重大的洞察和进展。<br />
<br />
<br />
自复制的一种变体在编译器构造中具有实际意义,在天然自复制中也会出现类似的自我改进现象。编译器(表现型)可以应用于编译器自身的源代码(基因型) ,从而产生编译器本身。在编译器开发过程中,一般使用修改(变异)的源代码来创建下一代编译器。这个过程不同于天然的自我复制,因为这个过程是由工程师指导的,而不是复制机本身。<br />
<br />
<br />
<br />
==机械中的自复制==<br />
<br />
机器人学领域的一项活动就是机器的自复制。由于所有机器人(至少在现代)都有相当数量的相同特性,一个自复制机器人(或者可能是一群机器人)需要做到以下几点:<br />
<br />
#获得构建材料<br />
#制造新零件,包括最小的零件和思维组件<br />
#提供一个稳定一致的动力源<br />
#为新成员编程<br />
#改正子代产物的任何错误<br />
<br />
<br />
在纳米级别上,组装者也可能被设计成在自身动力下进行自复制。这反过来又导致了“灰蛊 grey goo” 版本的世界末日,就像在诸如《花开 Bloom》,《掠食 Prey》和《递归Recursion》这样的科幻小说中描述的那样。<br />
<br />
<br />
美国前瞻协会已经为机械自复制领域的研究者们发布了指导方针。<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> 指导方针建议研究者使用一些特定的技术来防止机械复制因子失控,比如使用广播结构 broadcast architecture。<br />
<br />
<br />
关于与工业时代相关的机械复制的详细文章,请参阅[[大规模生产 mass production]]。<br />
<br />
<br />
==研究领域==<br />
以下领域已开展的与自复制相关的研究:<br />
<br />
* 生物学研究自然复制和复制因子及其相互作用。这些可以成为避免自我复制机器设计困难的重要指导。<br />
* 在化学领域,自我复制研究通常特指关于一组特定的分子如何在这个分子集群(通常是系统化学领域的一部分)中共同作用以复制对方<ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref>。<br />
* '''<font color="#ff8000">模因论(Memetics)</font>'''研究思想及其在人类文化中的传播。'''<font color="#ff8000">模因(Meme)</font>'''只需要很少的材料,因此在理论上与病毒相似,通常被称为病毒性的。<br />
* 分子纳米技术是关于制造纳米级的组装工具。如果没有自我复制,分子机器的资本和组装成本就会变得不可思议的高。<br />
* 空间资源: 美国航天局资助了一些设计研究,通过开发自我复制机制来开采空间资源。这些设计大多数包括计算机控制的可复制自己的机器。<br />
* 计算机安全:许多计算机安全问题是由感染计算机的自复制计算机程序造成的——计算机蠕虫和计算机病毒。<br />
* 在并行计算中,在大型计算机集群或分布式计算系统的每个节点上手动加载一个新程序需要很长时间。使用移动代理程序自动加载新程序可以节省系统管理员大量的时间,并且可以更快地为用户提供结果,只要他们不失去控制。<br />
<br />
==工业==<br />
===太空探索和制造业===<br />
<br />
太空系统中自复制的目标是利用低发射质量的大量物质。例如,一个自养自复制机械可以用太阳能电池覆盖月球或行星,并通过微波将能量传送到地球。一旦就位,自己建造的同样的机器也可以生产原材料或制成品,包括运输产品的运输系统。另一个自复制机械模型会在星系和宇宙中复制自己,把信息传回来。<br />
<br />
<br />
一般来说,由于这些系统是自养的,他们是已知最困难和复杂的复制因子。它们也被认为是最危险的复制因子,因为它们不需要人类的任何投入来繁殖。<br />
<br />
<br />
一个关于太空中复制因子的经典理论研究是1980年由 NASA 的罗伯特·弗雷塔斯 Robert Freitas 编辑的关于自养铿锵复制因子的研究。<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
<br />
大部分的设计研究都关注于采用一个简单、灵活的化学系统来处理月球表面的风化层,以及复制因子所需要的元素比率和从风化层中获得的元素比率之间的差异。限制元素是'''氯(Chlorine)''',它是处理风化层以获得铝的一个必不可少的元素。氯在月球的风化层中非常罕见,通过投入适量的氯,可以保证更快的生殖速度。<br />
<br />
<br />
参考设计采用了由小型计算机控制、在轨道(铁轨)上行驶的电动推车。每辆推车装上一个简单的机械手或一个小型推土铲,就构成一个基本的机器人。<br />
<br />
<br />
电力将由支撑在支柱上的“天篷”状的太阳能电池提供。其他的机器可以在天篷下面运转。<br />
<br />
<br />
一个“铸造机器人”将使用一个机械手臂和一些雕刻工具来制作石膏模具。石膏模具易于制作,而且能够生产表面光洁度好且精密的零件。然后,机器人将用非导电熔岩(玄武岩)或纯金属铸造大部分零件。它内部的电炉可将这些材料熔化。<br />
<br />
<br />
他们提出了一个探索性的、更为复杂的“芯片工厂 chip factory”来生产计算机和电子系统,但设计者们也表示,把这些芯片像“维生素”一样从地球运来,也许同样可行。<br />
<br />
===分子制造业===<br />
纳米技术学家尤其相信,在人类设计出一种纳米尺度的自复制组装器之前,他们的工作很可能无法达到成熟的状态[http://www.MolecularAssembler.com/KSRM/4.11.3.htm]。 <br />
<br />
<br />
这些系统比自养系统简单得多,因为它们可以获得纯净的原料和能源供应,不需要自己再生产这些材料。这一区别正是关于分子制造是否可行的一些争论的根源。许多认为分子制造不可行的权威人士,明确引用的是复杂自养自复制系统的资料;而许多认为其可行的权威人士,引用的则是已被证明可行的、简单得多的自组装系统。与此同时,2003年的一项实验展示了一个由乐高积木搭建的自主机器人:它沿预设轨道运行,能够从外部提供的4个组件出发,精确地组装出自己的复制品。[http://www.MolecularAssembler.com/KSRM/3.23.4.htm]<br />
<br />
<br />
仅仅利用现有细胞的复制能力是不够的,因为蛋白质的生物合成过程中存在局限性。<br />
<br />
<br />
我们需要的是合理设计一种具有更广泛合成能力的全新复制因子。<br />
<br />
<br />
2011年,纽约大学的科学家们开发出了可自复制的人造结构,这一过程有产生新型材料的潜力。他们已经证明,这种结构不仅可以复制像细胞 DNA 或 RNA 这样的分子,而且可以复制能够呈现许多不同形态、具有许多不同功能特征、并与许多不同类型的化学物种相关联的离散结构。<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
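“携带信息的图案能够自我复制”这一思想,也可以用一个极简的计算模型来演示。下面的 Python 草图(只是概念示意,与上述 DNA 实验无关)实现了一维 Rule 90 异或细胞自动机:由于该规则在 GF(2) 上是线性的,任意宽度不超过 2^k 的初始图案在演化 2^k 步后都会分裂为自身的两个精确拷贝,即经典的 Fredkin 复制子现象:<br />

```python
# Rule 90(异或)细胞自动机:每个细胞的新状态是其左右邻居状态的异或。
# 该规则在 GF(2) 上是线性的,因此任意初始图案在 2^k 步后
# 都会分裂成向左、向右各平移 2^k 格的两个精确拷贝(Fredkin 复制子)。

def rule90_step(cells):
    """对一圈细胞执行一步 Rule 90 演化(周期性边界)。"""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

width = 32
pattern = [1, 0, 1]            # 任意待复制的“信息图案”
cells = [0] * width
cells[15:18] = pattern         # 放在位置 15..17

for _ in range(4):             # 演化 2^2 = 4 步,4 大于图案宽度 3
    cells = rule90_step(cells)

print(cells[11:14], cells[19:22])   # 左右两处各出现一份 [1, 0, 1] 拷贝
```

宽度为3的图案 [1, 0, 1] 在4步之后出现在原位置左右各偏移4格的两处,逐位与初始图案相同。<br />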
<br />
<br />
有关假设的自我复制系统的其他化学基础的讨论,请参阅[[替代生物化学 alternative biochemistry]]。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==参阅==<br />
*[https://zhuanlan.zhihu.com/p/135833919 从自我复制到自我意识]<br />
* [[人造生命 Artificial life]]<br />
* [[太空鸡实验 Astrochicken]]<br />
* [[自创生 Autopoiesis]]<br />
* [[复杂系统 Complex system]]<br />
* [[DNA复制]]<br />
* [[自复制机器 Self-replicating machine]]<br />
** [[自复制空间飞行器 Self-replicating spacecraft]]<br />
* [[空间制造 Space manufacturing]]<br />
* [[冯·诺依曼宇宙构造函数 Von Neumann universal constructor]]<br />
* [[冯·诺依曼机 Von Neumann machine (disambiguation)]]<br />
* [[自重构 Self reconfigurable]]<br />
* [[最终人存原理 Final Anthropic Principle]]<br />
* [[正反馈 Positive feedback]]<br />
* [[谐波 Harmonic]]<br />
<br />
<br><br />
<br />
==参考文献==<br />
{{reflist}}<br />
<br />
<br />
==其他文献==<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
* Advanced Automation for Space Missions, a 1980 NASA study edited by Robert Freitas<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines] first comprehensive survey of entire field in 2004 by Robert Freitas and Ralph Merkle<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advance Concepts study by General Dynamics]- concluded that complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
* ''Gödel, Escher, Bach'' by Douglas Hofstadter (detailed discussion and many examples)<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.) Contemporary Math. vol. 135 (1992), 239-264.<br />
<br />
{{refend}}<br />
<br />
----<br />
本中文词条由[[用户:Qige96|Ricky]]翻译,[[用户:Paradoxist-Paradoxer|Paradoxist-Paradoxer]]审校,[[用户:薄荷|薄荷]]欢迎在讨论页面留言。<br />
<br />
'''本词条内容源自公开资料,遵守 CC3.0协议。'''</div>
粲兰 https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=19049 通用人工智能 2020-11-22T15:03:34Z
<hr />
<div>此词条由袁一博翻译,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
本词条已由[[用户:Qige96|Ricky]]审校。<br />
<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.htmlhttps://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><br />
<br />
Artificial general intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI.<br />
<br />
'''<font color="#ff8000">通用人工智能(Artificial general intelligence,AGI)</font>'''是一种假想中的机器智能<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref>,它有能力理解或学习任何人类能够完成的智力任务。这是一些人工智能研究的主要目标,也是科幻小说和未来学研究的常见话题。通用人工智能也可以被称为'''<font color="#ff8000">强人工智能(Strong AI)</font>'''、'''<font color="#ff8000">完全人工智能(Full AI)</font>'''或'''<font color="#ff8000">通用智能行为(general intelligent action)</font>'''。<br />
<br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience consciousness. Today's AI is speculated to be many years, if not decades, away from AGI.<br />
<br />
一些学术文献保留了“强人工智能”这个术语,专指能够体验意识的机器。据推测,今天的人工智能距离通用人工智能还有很多年,甚至数十年之久。<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
Some authorities emphasize a distinction between strong AI and applied AI, also called narrow AI In contrast to strong AI, weak AI is not intended to perform human cognitive abilities. Rather, weak AI is limited to the use of software to study or accomplish specific problem solving or reasoning tasks.<br />
<br />
一些权威机构强调'''强人工智能'''与'''应用人工智能'''(也称'''<font color="#ff8000">狭义人工智能(Narrow AI)</font>'''或'''<font color="#ff8000">弱人工智能(Weak AI)</font>''')之间的区别:与强人工智能不同,弱人工智能并不试图再现人类的认知能力,而是仅限于使用软件来研究或完成特定的问题求解或推理任务。<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
As of 2017, over forty organizations are researching AGI.<br />
<br />
截止到2017年,已经有超过四十家机构在研究 AGI。<br />
<br />
<br />
<br />
==判定要求==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<br />
<br />
人们提出了各种各样的智能标准(最著名的是'''<font color="#ff8000">图灵测试(Turing test)</font>'''),但到目前为止,还没有一个定义能让所有人满意。然而,人工智能研究人员普遍认为,智能需要做到以下几点:<br />
<ref><br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
{{Harvnb|Russell|Norvig|2003}},<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
{{Harvnb|Nilsson|1998}}.<br />
</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
* [[automated planning and scheduling|plan]];<br />
* [[machine learning|learn]];<br />
* communicate in [[natural language processing|natural language]];<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
* 推理、使用策略,解决问题,并且在不确定条件下做出决策;<br />
* 表示知识,包括常识;<br />
* 规划;<br />
* 学习;<br />
* 使用自然语言交流;<br />
* 以及综合运用所有技巧以达到某个目的。<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
Other important capabilities include the ability to sense (e.g. see) and the ability to act (e.g. move and manipulate objects) in the world where intelligent behaviour is to be observed. This would include an ability to detect and respond to hazard. Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in) and autonomy.<br />
<br />
其他重要的能力包括在可观测到智能行为的现实世界中进行感知(例如视觉)和行动(例如移动和操纵物体)的能力,这也包括检测并应对危险的能力。许多跨学科的智能研究方法(例如认知科学、计算智能和决策科学)倾向于强调还需要考虑额外的特质,例如想象力(指形成并非预先编入程序的意象和概念的能力)和自主性。<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.<br />
<br />
展示出上述许多能力的计算机系统确实存在(例如计算创造性、自动推理、决策支持系统、机器人、进化计算、智能体),但尚未达到人类水平。<br />
<br />
<br />
<br />
===判定人类水平通用人工智能的测试===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
The following tests to confirm human-level AGI have been considered:<br />
<br />
人们考虑过以下几种用于确认人类水平通用人工智能的测试:<br />
<br />
;[[图灵测试]]<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
: 一台机器和一个人都以互相看不见的方式与第二个人对话,后者必须评估两者中哪一个是机器;如果机器能在相当大比例的时间里骗过评估者,它就通过了测试。注意:图灵并没有规定什么才算是智能,只是规定:一旦知道对方是一台机器,就应取消它的资格。<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
;咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
: 一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
;机器人大学生考试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
: 一台机器进入一所大学,学习并通过与人类相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
;就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
: 机器从事一项经济上重要的工作,在同一项工作中表现得至少和人类一样好。<br />
<br />
<br />
<br />
===需要通用人工智能解决的问题===<br />
<br />
{{Main|AI-complete}}<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<br />
<br />
对于计算机来说,最困难的问题被非正式地称为“'''<font color="#ff8000">AI完全问题(AI-complete)</font>'''”或“'''<font color="#ff8000">AI困难问题(AI-hard)</font>'''”,意味着解决它们需要相当于人类智能的通用才能,即超出特定目的算法能力范围的强人工智能。<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.<br />
<br />
AI完全问题被认为包括通用计算机视觉、自然语言理解,以及在解决任何现实世界问题时对意外情况的处理。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks.<br />
<br />
目前的计算机技术还不能单独解决AI完全问题,还需要人工计算的参与。这一特性很有用处,例如可以像 CAPTCHA 验证码那样用来检测人类的存在,也可以用于计算机安全以抵御暴力破解攻击。<br />
<br />
==历史 == <br />
<br />
===经典人工智能===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
Modern AI research began in the mid 1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus prediction of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," although Minsky states that he was misquoted.<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色“哈尔9000”,它代表了人工智能研究人员相信他们到2001年时能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)当时是一个项目顾问,该项目旨在根据当时的一致预测,使“哈尔9000”尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<br />
<br />
然而,到了20世纪70年代初,人们清楚地看到研究人员严重低估了这个项目的难度。资助机构开始对通用人工智能持怀疑态度,并向研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。20世纪80年代初,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,制定了一个十年时间表,其中包括“进行日常对话”等通用人工智能目标。受此影响,加上专家系统的成功,工业界和政府重新向这一领域注入资金。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标始终未能实现。20年内第二次,曾预言通用人工智能即将实现的人工智能研究者被证明犯了根本性的错误。到了20世纪90年代,人工智能研究人员已因做出无法兑现的承诺而声名受损。他们变得完全不愿再做预测,也避免提及“人类水平”的人工智能,因为害怕被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
===狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<br />
<br />
在20世纪90年代和21世纪初,主流人工智能把重点放在能够产出可验证结果和商业应用的具体子问题上,例如人工神经网络和统计机器学习,从而取得了远为巨大的商业成功和学术声望。这些“应用人工智能”系统如今在整个技术产业中得到广泛应用,相关研究在学术界和产业界都获得了大量资助。目前,这一领域的发展被认为是一个新兴趋势,预计还需要10年以上的时间才会进入成熟阶段。<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过把解决各个子问题的程序结合起来,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会在中途与传统自上而下的路线相遇,从而提供在推理程序中一直令人沮丧地难以企及的现实世界能力和常识知识。当象征性的‘金道钉’被钉下、把这两方面的努力连为一体时,完全智能的机器就会诞生。”<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
然而,连这一基本哲学立场也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常提出这样的期望,即“自上而下”(符号)的认知建模方法将在某处与“自下而上”(感官)的方法相会。如果本文中关于符号奠基的考量是成立的,那么这种期望就是无可救药的模块化思路,从感觉到符号其实只有一条可行的路径:自下而上地建立。像计算机软件层那样自由漂浮的符号层永远不可能通过这条路径达到(反之亦然)——我们甚至不清楚为什么应该尝试达到这样一个层次,因为那样看起来只会把我们的符号从其内在意义上连根拔起(从而仅仅把我们自己变成可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
===现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by Xiamen University's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
“通用人工智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用,又在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。这一研究目标本身要古老得多,例如道格·莱纳特(Doug Lenat)的 Cyc 项目(始于1984年),以及艾伦·纽厄尔(Allen Newell)的 Soar 项目都被认为属于通用人工智能的范畴。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一个通用人工智能暑期学校。第一门大学课程由托多尔·阿瑙多夫(Todor Arnaudov)于2010年和2011年在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,在短期内无法完全复制。不过,仍有少数计算机科学家积极从事通用人工智能研究,其中许多人正在为一系列通用人工智能会议做出贡献。这类研究极其多样化,而且往往具有开创性。格兹尔在他的书的序言中说,对于建成一个真正灵活的通用人工智能所需时间的估计从10年到超过一个世纪不等,但通用人工智能研究社区似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在'''<font color="#ff8000">《奇点临近》(The Singularity is Near)</font>'''中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
然而,大多数主流的人工智能研究人员怀疑进展是否会如此之快。明确寻求通用人工智能的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及与其相关的红木神经科学研究所(Redwood Neuroscience Institute)。此外,机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等机构也相继成立,以影响通用人工智能的发展路径。最后,还有像人脑计划(Human Brain Project)这样以建立人脑的功能性模拟为目标的项目。2017年的一项通用人工智能调查对45个已知的、明确地或(通过已发表的研究)隐含地研究通用人工智能的“活跃研发项目”进行了分类,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
2017年,研究人员刘锋、石勇和刘颖对谷歌人工智能、苹果的 Siri 等公开且可自由访问的弱人工智能进行了智商测试。这些人工智能最高达到约47的数值,大致相当于一名上一年级的六岁儿童。成年人的平均值约为100。2014年也进行过类似的测试,当时智商分数的最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
2019年,电子游戏程序员兼航空航天工程师约翰·卡迈克(John Carmack)宣布了研究通用人工智能的计划。<br />
<br />
==模拟人脑所需要的处理能力==<br />
<br />
==='''<font color="#ff8000">全脑模拟</font>'''===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap>{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
实现通用智能行为的一种被广泛讨论的方法是全脑模拟:通过详细扫描和绘制一个生物大脑,并将其状态复制到计算机系统或其他计算设备中,来建立一个低层次的大脑模型。计算机运行的模拟模型对原始大脑如此忠实,以至于它的行为在本质上——就一切实际目的而言难以区分地——与原始大脑相同。“基本思路是,取一个特定的大脑,详细扫描其结构,并构建一个对原件极为忠实的软件模型,使其在适当的硬件上运行时,行为方式与原始大脑基本相同。”在以医学研究为目的的大脑模拟背景下,计算神经科学和神经信息学领域讨论过全脑模拟;在人工智能研究中,它则被作为实现强人工智能的一种途径加以讨论。能够提供必要细节理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,一张质量足够高的大脑图谱将与所需的计算能力在相近的时间尺度上出现。<br />
<br />
<br />
<br />
===早期预测===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.对在不同层次上模拟人类大脑所需处理能力的估计(来自 Ray Kurzweil,以及 Anders Sandberg 和 Nick Bostrom),连同 [[TOP500]] 榜单上历年最快的超级计算机。注意图中使用对数坐标,指数趋势线假设计算能力每1.1年翻一番。库兹韦尔认为在神经模拟层次上即可实现思维上传,而桑德伯格和博斯特罗姆的报告对意识从何处产生则不太确定。{{sfn|Sandberg|Boström|2008}}]] <br />
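图注假设计算能力每1.1年翻一番。下面用一小段 Python 演示按这一假设外推趋势线的方法(仅为示意:函数名与基准值均为说明而设,并非出自原图数据)。<br />

```python
def capacity(years_elapsed, base=1.0, doubling_time=1.1):
    """按“每 doubling_time 年翻一番”的假设,
    从基准值 base 外推 years_elapsed 年后的计算能力(相对倍数)。"""
    return base * 2 ** (years_elapsed / doubling_time)

# 例:按每1.1年翻一番,11年约增长 2^10 ≈ 1024 倍
growth_11yr = capacity(11.0)
```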
<br />
For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
为进行低层次的大脑模拟,需要一台极其强大的计算机。人类的大脑有大量的突触。10<sup>11</sup>(1000亿)个神经元中,每一个平均与其他神经元有7000个突触连接(突触)。据估计,一个三岁儿童的大脑约有10<sup>15</sup>个突触(1千万亿)。这个数字随着年龄的增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>个突触(100万亿到500万亿)不等。基于神经元活动的简单开关模型,对大脑处理能力的一个估计约为每秒10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔研究了等价模拟人脑所需硬件的各种估计,并采纳了每秒10<sup>16</sup>次计算(cps)这一数字。(作为比较,如果一次“计算”相当于一次“浮点运算”——一种用于给当前超级计算机评级的度量——那么10<sup>16</sup>次“计算”相当于10 petaFLOPS,即每秒10<sup>16</sup>次浮点运算,这一性能已于2011年实现。)他用这个数字来预测:如果撰写当时计算机能力的指数增长持续下去,那么在2015年到2025年之间的某个时候,必要的硬件将会出现。<br />
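上文的几个数量级估计可以用一小段 Python 核算(仅为示意:数值均取自正文,变量名为说明而设;按平均连接数算出的突触总数略高于正文给出的成人区间上限,但量级一致)。<br />

```python
neurons = 1e11               # 约 10^11(1000亿)个神经元
synapses_per_neuron = 7000   # 平均每个神经元约 7000 个突触连接
total_synapses = neurons * synapses_per_neuron
# ≈ 7×10^14,与正文中成人 10^14 至 5×10^14 个突触的估计同一数量级

kurzweil_cps = 1e16          # 库兹韦尔采纳的每秒计算次数
petaflops = 1e15             # 1 petaFLOPS = 每秒 10^15 次浮点运算
# 若一次“计算”等价于一次浮点运算,则 10^16 cps 相当于 10 petaFLOPS
equivalent_petaflops = kurzweil_cps / petaflops
```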
<br />
<br />
<br />
===对神经元的更精细的模拟===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
与生物神经元相比,库兹韦尔所假设的、并在当前许多'''<font color="#ff8000">人工神经网络(artificial neural network)</font>'''实现中使用的人工神经元模型是简单的。大脑模拟可能需要捕捉生物神经元细胞行为的细节,而目前人们对这些细节只有最粗略的了解。对神经行为的生物、化学和物理细节(特别是在分子尺度上)进行全面建模所引入的开销,将需要比库兹韦尔的估计大几个数量级的计算能力。此外,这些估计没有考虑'''<font color="#ff8000">胶质细胞(glial cells)</font>''':其数量至少与神经元相当,甚至可能多达神经元的10倍,且现已知它们在认知过程中发挥作用。<br />
<br />
<br />
===研究现状===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
有一些研究项目正在使用在传统计算机体系结构上实现的更复杂的神经模型来研究大脑模拟。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个“大脑”(含10<sup>11</sup>个神经元)的非实时模拟:在一个由27个处理器组成的集群上,模拟模型的1秒钟活动耗时50天。2006年,蓝脑计划利用世界上最快的超级计算机架构之一——IBM 的蓝色基因(Blue Gene)平台,创建了对单个大鼠'''<font color="#ff8000">新皮质柱(neocortical column)</font>'''的实时模拟,它包含大约10,000个神经元和10<sup>8</sup>个突触。一个更长期的目标是建立对人脑生理过程的详细的功能性模拟:“建造一个人脑并非不可能,我们可以在10年内做到,”蓝脑计划主任亨利·马克拉姆(Henry Markram)2009年在牛津举行的 TED 大会上说道。此外还有一些声称已模拟出猫脑的有争议的说法。神经-硅接口已被作为一种可能具有更好扩展性的替代实现策略提出。<br />
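文中“在27个处理器的集群上用50天模拟1秒钟”对应的减速倍数可以简单算出(仅为示意,变量名为说明而设)。<br />

```python
wall_clock_seconds = 50 * 24 * 3600   # 50 天的墙上时钟时间(秒)
simulated_seconds = 1                 # 模拟出的大脑活动时长(秒)
slowdown = wall_clock_seconds / simulated_seconds
# 即该模拟比实时慢约 4.32×10^6 倍
```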
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
汉斯·莫拉维克(Hans Moravec)在他1997年的论文《计算机硬件何时能与人脑匹敌?》中回应了上述论点(“大脑更复杂”、“神经元必须建模得更详细”)。他测量了现有软件模拟神经组织(特别是视网膜)功能的能力。他的结果既不取决于胶质细胞的数量,也不取决于神经元在何处执行何种处理。<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at the complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, models based on a generic neural network did not work. Currently, efforts are focused on the precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
OpenWorm 项目已经探讨了建模生物神经元的实际复杂性。该项目旨在完全模拟一种蠕虫,其神经网络中只有302个神经元(在总共约1000个细胞中)。项目开始之前,这种动物的神经网络已经被很好地记录了下来。然而,尽管任务一开始看起来很简单,基于一般神经网络的模型并不起作用。目前,研究的重点是精确模拟生物神经元(部分在分子水平上),但结果还不能被称为完全成功。即使在人脑尺度的模型中需要解决的问题的数量与神经元的数量不成比例,沿着这条路径走下去的工作量也是显而易见的。<br />
<br />
<br />
<br />
===对基于模拟的研究方法的批评===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
对模拟大脑方法的一个根本性批评来自具身认知理论:人类的具身性被视为人类智能的一个本质方面。许多研究者认为,具身性对于意义的奠基是必要的。如果这种观点是正确的,那么任何功能齐全的大脑模型将不仅需要包含神经元,还需要包含更多东西(例如一个机器人身体)。格兹尔提出了虚拟具身(就像在《第二人生》中那样)的方案,但目前还不知道这是否足够。<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest no such simulation exists. There are at least three reasons for this:<br />
<br />
自2005年以来,使用运算能力超过10<sup>9</sup> cps(库兹韦尔的非标准单位“每秒计算次数”,见上文)微处理器的台式计算机已经面世。根据库兹韦尔(和莫拉维克)使用的大脑计算能力估算,这样的计算机应该能够支持对蜜蜂大脑的模拟,但尽管有人对此感兴趣,这样的模拟却并不存在。这至少有三个原因:<br />
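<br />
正文中的数量级可以用一个简单的换算来加以说明。下面是一个示意性的估算草图:其中人脑约 10<sup>14</sup> cps 取自莫拉维克的估计,蜜蜂脑约 10<sup>6</sup> 个神经元、人脑约 10<sup>11</sup> 个神经元为常见的数量级近似,而“计算量随神经元数目线性缩放”本身也只是一个假设:<br />
<br />
```python
# 按神经元数目线性缩放的数量级估算(仅为示意)
human_brain_cps = 1e14  # 莫拉维克对人脑的估计(每秒计算次数)
human_neurons = 1e11    # 人脑神经元数目的数量级
bee_neurons = 1e6       # 蜜蜂脑神经元数目的数量级

bee_brain_cps = human_brain_cps * (bee_neurons / human_neurons)
print(f"{bee_brain_cps:.0e}")  # 约 1e+09
```
<br />
按这一粗略换算,模拟蜜蜂大脑约需 10<sup>9</sup> cps,恰与正文所述 2005 年台式机的水平相当;但正如正文接下来列出的原因所示,原始算力相当并不等于模拟可行。<br />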
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
The neuron model seems to be oversimplified (see next section).<br />
<br />
<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
There is insufficient understanding of higher cognitive processes to establish accurately what the brain's neural activity, observed using techniques such as functional magnetic resonance imaging, correlates with.<br />
<br />
<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. The Extended Mind thesis formalizes the philosophical concept, and research into cephalopods has demonstrated clear examples of a decentralized system.<br />
<br />
<br />
#神经元模型似乎被过度简化了(见下一节)。<br />
#人们对高级认知过程的理解不够充分,无法准确确定通过功能性磁共振成像等技术观察到的大脑神经活动究竟与什么相关。<br />
#即使我们对认知的理解有了足够的进步,早期的仿真程序也可能非常低效,因此需要多得多的硬件。<br />
#有机体的大脑虽然关键,但可能不是认知模型的合适边界。为了模拟蜜蜂的大脑,可能还需要模拟其身体和环境。'''<font color="#ff8000">延展心灵论题(The Extended Mind thesis)</font>'''将这一哲学概念形式化了,而对头足类动物的研究已经展示了去中心化系统的明显例子。<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses. Another estimate is 86 billion neurons of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. Glial cell synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
此外,人类大脑的规模目前还没有得到精确的界定。一种估计认为,人类大脑大约有1000亿个神经元和100万亿个突触。另一种估计是860亿个神经元,其中163亿个在大脑皮层,690亿个在小脑。神经胶质细胞的突触目前尚无定量数据,但已知数量极多。<br />
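<br />
正文给出的两组估计可以做一个简单的核算(示意性计算,数字直接取自正文):<br />
<br />
```python
# 估计一:约 1000 亿神经元、100 万亿突触
neurons_est1 = 100e9
synapses_est1 = 100e12
ratio = synapses_est1 / neurons_est1
print(ratio)  # 平均每个神经元约 1000 个突触

# 估计二:共 860 亿神经元,其中大脑皮层 163 亿、小脑 690 亿
total, cortex, cerebellum = 86e9, 16.3e9, 69e9
other = total - cortex - cerebellum
print(other / 1e9)  # 其余脑区约 7 亿(0.7e9)个神经元
```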
<br />
==强人工智能和意识==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He wanted to distinguish between two different hypotheses about artificial intelligence:<br />
<br />
1980年,哲学家约翰•塞尔(John Searle)提出“强人工智能”(strong AI)一词作为他在中文屋论证的一部分。他想要区分关于人工智能的两种不同假设:<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
*一个人工智能系统可以思考并拥有心灵。(词语“心灵”对哲学家来说有特殊意义,正如在“身心问题”或“心灵哲学”中的使用一样。)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
*一个人工智能系统(仅仅)可以表现得好像它能思考并拥有心灵。<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<br />
<br />
The first one is called "the strong AI hypothesis" and the second is "the weak AI hypothesis" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<br />
<br />
第一条被称为“'''<font color="#ff8000">强人工智能假设(the strong AI hypothesis)</font>'''” ,第二条被称为“'''<font color="#ff8000">弱人工智能假设(the weak AI hypothesis)</font>'''”,因为第一条假设提出了更强的陈述: 它假定机器发生了某种特殊的事件,超出了我们能够测试的所有能力。塞尔将“强人工智能假说”称为“强人工智能”。这种用法在人工智能学术研究和教科书中也很常见。<ref>For example:<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to Russell and Norvig, "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."<br />
<br />
弱人工智能假说等价于“通用人工智能是可能的”这一假说。根据罗素和诺维格的说法,“大多数人工智能研究人员认为弱人工智能假说是理所当然的,并不关心强人工智能假说。”<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
In contrast to Searle, Ray Kurzweil uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind, regardless of whether a philosopher would be able to determine if it actually has a mind or not.<br />
<br />
与塞尔不同,雷·库兹韦尔(Ray Kurzweil)用“强人工智能”一词来描述任何行为表现得如同拥有心灵的人工智能系统,而不管哲学家能否确定它是否真的拥有心灵。<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
In science fiction, AGI is associated with traits such as consciousness, sentience, sapience, and self-awareness observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "strong AI hypothesis." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a mind and consciousness. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
在科幻小说中,通用人工智能与生物所具有的意识、知觉、智慧和自我意识等特征联系在一起。然而,根据塞尔的说法,通用智能是否足以产生意识还是一个悬而未决的问题。“强人工智能”(如上文库兹韦尔所定义的)不应与塞尔的“强人工智能假设”相混淆。强人工智能假说认为,一台行为表现得与人同样智能的计算机也必然拥有心灵和意识。通用人工智能只是指机器显示出来的智能程度,而与机器是否拥有心灵无关。<br />
<br />
<br />
<br />
===意识===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence:<br />
<br />
除了智能之外,人类心灵还有其他一些与强人工智能概念相关的方面,它们在科幻小说和人工智能伦理学中扮演着重要角色:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
*意识:拥有主观体验和思想。值得一提的是意识是很难定义的。由托马斯·内格尔给出的一个著名定义陈述如下:一个事物如果能体会到某种感觉,那么它是有意识的。如果我们不是有意识的,那么我们不会有任何感觉。内格尔以蝙蝠为例:我们可以凭借感觉问出:“成为一只蝙蝠的感觉如何?”但是,我们不大可能问出:“成为一个吐司机的感觉如何?”内格尔总结认为蝙蝠像是有意识的(即拥有意识),但是吐司机却不是。<br />
*自我意识:能够意识到自己是一个独立的个体,尤其是意识到自己的思想。<br />
*知觉:主观地“感受”知觉或情感的能力。<br />
*智慧:具备智慧的能力。<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the rights of non-human animals. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<br />
<br />
这些特征具有道德维度,因为拥有这种强人工智能形式的机器可能拥有法律权利,类似于非人类动物的权利。因此,一些初步工作已经开展,探讨如何将完全道德主体纳入现有的法律和社会框架。这些方法都集中在“强”人工智能的法律地位和权利上。<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
However, Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity. It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
然而,比尔·乔伊(Bill Joy)等人认为,具有这些特征的机器可能会威胁到人类的生命或尊严。这些特征对于强人工智能来说是否必要,还有待证明。意识的作用并不清楚,目前也没有公认的测试来确定其存在。如果一台机器装有能够模拟意识神经相关物的装置,它会自动具有自我意识吗?也有可能这些特性中的一些(比如感知能力)会自然而然地从一个完全智能的机器中涌现出来;或者,一旦机器开始以一种明显智能的方式行动,人们就会自然而然地把这些特性归于机器。例如,智能行为可能就足以带来知觉,而不是相反。<br />
<br />
===人工意识研究===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.<br />
<br />
虽然意识在强人工智能/通用人工智能中的作用存在争议,但许多通用人工智能研究人员认为,研究实现意识的可能性至关重要。在早期的一次尝试中,伊戈尔·亚历山大(Igor Aleksander)认为创造有意识机器的原理已经存在,但训练这样一台机器去理解语言需要四十年时间。<br />
<br />
==对于人工智能研究进展缓慢的可能解释==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level. A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power. In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.<br />
<br />
自从1956年人工智能研究启动以来,这一领域的发展速度已经随着时间的推移而放缓,创造具有人类水平智能机器的目标依然遥不可及。这种延迟的一个可能的解释是计算机缺乏足够的存储空间或处理能力。此外,人工智能研究过程的复杂程度也可能限制人工智能研究的进展。<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like Hubert Dreyfus and Roger Penrose who deny the possibility of achieving strong AI. John McCarthy was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.<br />
<br />
虽然大多数人工智能研究人员认为强人工智能可以在未来实现,但也有一些人,如休伯特·德雷福斯(Hubert Dreyfus)和罗杰·彭罗斯(Roger Penrose),否认实现强人工智能的可能性。约翰·麦卡锡(John McCarthy)是众多相信人类水平人工智能终将实现的计算机科学家之一,但实现的日期无法准确预测。<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究人员可能需要修改其学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如威廉·克罗克森(William Clocksin)在2003年所写:“这个框架始于魏岑鲍姆(Weizenbaum)的观察,即智能只有相对于特定的社会和文化背景才能表现出来”。<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking (Moravec's paradox). A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent. However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.<br />
<br />
此外,人工智能研究人员已经能够制造出可以完成对人类来说很复杂的工作(如数学)的计算机,但相反,他们却难以开发出能够执行对人类来说很简单的任务(如行走)的计算机('''<font color="#ff8000">莫拉维克悖论(Moravec's paradox)</font>''')。大卫·格勒尼特(David Gelernter)描述的一个问题是,有些人认为思考和推理是等价的。然而,思想与思想的创造者是否彼此独立这一问题,一直吸引着人工智能研究者。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI. Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员做出的未能兑现的预测,以及对人类行为缺乏完整的理解,削弱了人类水平人工智能这一最初设想。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究人员仍对在21世纪实现人工智能的目标保持乐观。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware. Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.<br />
<br />
还有一些其他可能的原因可以解释为什么强人工智能的研究持续了这么长时间。科学问题的错综复杂,以及需要通过心理学和神经生理学充分了解人脑,限制了许多研究人员在计算机硬件中模拟人脑功能的工作。许多研究人员往往低估了与人工智能未来预测相关的种种疑问,而如果不认真对待这些问题,人们就会忽视问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment. When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.<br />
<br />
克罗克森说,可能阻碍人工智能研究进展的一个概念上的局限是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员最初瞄准人工智能这一目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并找出如何设计一台执行特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.<br />
<br />
抽象这种做法(人们在研究中面对特定语境时往往会重新定义它)使研究人员得以只专注于少数几个概念。抽象在人工智能研究中最有成效的应用来自规划和问题求解。虽然其目标是提高计算速度,但抽象的作用也引出了关于抽象算子如何参与其中的问题。<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area where a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area where a significant gap remains between computer performance and human performance. The specific functions programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
人工智能发展缓慢的一个可能原因,与许多人工智能研究人员的一个共识有关:在启发式方法这一领域,计算机性能与人类表现之间仍存在重大差距。为计算机编写的特定功能或许能够满足使其匹敌人类智能的许多要求。这些解释未必是强人工智能迟迟未能实现的根本原因,但得到了众多研究人员的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|volume=62|year=2019|journal=Business Horizons|pages=15–25|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael}}</ref><br />
<br />
There have been many AI researchers that debate over the idea whether machines should be created with emotions. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own. Emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<br />
<br />
许多人工智能研究人员一直在争论机器是否应该带有情感。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感总结了人类的经历,它使得人们记住那些经历。大卫·格勒尼特(David Gelernter)则写道: “除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情绪的关注给人工智能研究人员带来了一些问题,随着未来人工智能研究的进展,它与强人工智能的概念联系起来。<br />
<br />
==Controversies and dangers 争议和风险==<br />
<br />
<br />
<br />
===Feasibility 可行性===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
截至2020年3月,通用人工智能仍停留在推测阶段,因为迄今尚未有此类系统被展示出来。对于通用人工智能是否会实现以及何时实现,人们的看法各不相同。在一个极端,人工智能先驱赫伯特·西蒙(Herbert A. Simon)在1965年写道:“机器将在20年内有能力完成人类能做的任何工作。”然而,这一预言并没有实现。微软(Microsoft)联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”,以及“对认知的科学层面的深刻理解”。机器人专家阿兰·温菲尔德(Alan Winfield)在《卫报》(The Guardian)上撰文称,现代计算与人类水平人工智能之间的鸿沟,就像当前的太空飞行与实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
人工智能专家对通用人工智能可行性的看法时起时落,并可能在2010年代出现了复苏。2012年和2013年进行的四次民意调查显示,对于“有50%的把握认为通用人工智能将会实现”的时间点,专家们猜测的中位数在2040年到2050年之间(因调查而异),平均值则为2081年。当被问及同样的问题、但把握提高到90%时,16.5%的专家回答“永远不会”。关于通用人工智能当前进展的进一步讨论,见下文“确认人类水平通用人工智能的测试”和“通用人工智能智商测试”。<br />
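The gap between the polls' median (2040–2050) and mean (2081) reflects a right-skewed forecast distribution: a few very distant predictions pull the mean far past the median. A minimal sketch of this effect, using invented forecast years rather than the actual poll responses:

```python
# Illustrative only: the forecast years below are invented, not the
# actual 2012-2013 poll responses. They show how a long right tail
# pulls the mean well past the median, as in the surveys cited above.
from statistics import mean, median

forecasts = [2035, 2040, 2045, 2050, 2050, 2060, 2075, 2150, 2300]

print(median(forecasts))  # 2050 -- near the central cluster
print(mean(forecasts))    # ~2089.4 -- dragged upward by the outliers
```

With most answers clustered around mid-century, just two distant outliers push the mean roughly four decades past the median, which is why the two summary statistics in the surveys diverge so sharply.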
<br />
<br />
<br />
===对人类的潜在威胁===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
“人工智能构成生存风险,且这种风险需要得到远比现在更多的关注”这一论点得到了许多公众人物的支持,其中最著名的也许是埃隆·马斯克(Elon Musk)、比尔·盖茨(Bill Gates)和斯蒂芬·霍金(Stephen Hawking)。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素(Stuart J. Russell)。该论点的支持者有时会对怀疑论者表示困惑:盖茨表示,他不“理解为什么有些人不担心”;霍金则在2014年的社论中批评了普遍的漠不关心:“面对可能带来无法估量的利益和风险的未来,专家们肯定会尽一切可能确保最好的结果,对吗?错了。如果一个更先进的外星文明给我们发来信息说‘我们几十年后就到’,我们会不会只是回复‘好的,到了给我们打电话,我们会留着灯’?大概不会。但人工智能领域正在发生的事情或多或少就是如此。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
许多关注生存风险的学者认为,最好的出路是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个疑问:程序员可以采用哪些保障措施、算法或架构,来最大限度地保证他们那不断递归自我改进的人工智能在达到超级智能之后,仍然以友好而非破坏性的方式运行?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
人工智能可能构成生存风险的论点也有许多强烈的反对者。怀疑论者有时指责这一论点带有隐秘的宗教色彩:用对超级智能可能性的非理性信仰,取代了对全能上帝的非理性信仰。极端者如杰伦·拉尼尔(Jaron Lanier)认为,“当前的机器在任何意义上是智能的”这整个概念只是“一种幻觉”,是富人们的“惊天骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
现有的许多批评认为,通用人工智能在短期内不太可能实现。计算机科学家戈登·贝尔(Gordon Bell)认为,人类在到达'''<font color="#ff8000">技术奇点(technological singularity)</font>'''之前就会先自我毁灭。'''<font color="#ff8000">摩尔定律(Moore's Law)</font>'''的最初提出者戈登·摩尔(Gordon Moore)则宣称:“我是一个怀疑论者。我不认为技术奇点会发生,至少在很长一段时间内不会。我也不知道自己为什么会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,担心人工智能的生存风险“就像在我们还没踏上火星时就担心火星上人口过剩一样”。<br />
<br />
==See also 请参阅==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]] 自动机器学习<br />
<br />
* [[Machine ethics]] 机器伦理<br />
<br />
* [[Multi-task learning]] 多任务学习<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]] 超级智能<br />
<br />
* [[Nick Bostrom]] 尼克·博斯特罗姆<br />
<br />
* [[Eliezer Yudkowsky]] 埃利泽·尤德科夫斯基<br />
<br />
* [[Future of Humanity Institute]] 人类未来研究所<br />
<br />
* [[Outline of artificial intelligence]] 人工智能概要<br />
<br />
* [[Artificial brain]] 人工大脑<br />
<br />
* [[Transfer learning]] 学习迁移<br />
<br />
* [[Outline of transhumanism]] 超人类主义概要<br />
<br />
* [[General game playing]] 一般博弈<br />
<br />
* [[Synthetic intelligence]] 合成智能<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
智能放大,利用信息技术加强人类智慧而不是建造外在的通用人工智能<br />
<br />
<br />
==Notes 附注==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References 参考文献==<br />
<br />
{{refbegin|2}}<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
<br />
• 《人工智能的阶段》,[https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science],2020年4月2日。<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links 拓展链接==<br />
<br />
* [https://www.zhihu.com/question/50049187/answer/1361795900 强人工智能目前发展怎样,有希望实现吗?]<br />
<br />
* [https://zhuanlan.zhihu.com/p/59966491 AI寒冬论作者:通用人工智能仍是白日梦]<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
粲兰
https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E5%AD%A6&diff=18648
合成生物学
2020-11-18T03:54:29Z
<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共4491,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{redirect|Artificial life form|simulated life forms|Artificial life}}<br />
<br />
{{short description|Interdisciplinary branch of biology and engineering}}<br />
<br />
{{Synthetic biology}}<br />
<br />
[[File:Synthetic Biology Research at NASA Ames.jpg|thumb|Synthetic Biology Research at [[Ames Research Center|NASA Ames Research Center]]. NASA埃姆斯研究中心的合成生物学研究。]]<br />
<br />
<br />
<br />
<br />
'''Synthetic biology''' ('''SynBio''') is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
合成生物学(SynBio)是一个多学科的研究领域,旨在创造新的生物部件、设备和系统,或重新设计已经在自然界中发现的系统。<br />
<br />
<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as [[biotechnology]], [[genetic engineering]], [[molecular biology]], [[molecular engineering]], [[systems biology]], [[Model lipid bilayer|membrane science]], [[biophysics]], [[Biological engineering|chemical and biological engineering]], [[Electrical engineering|electrical and computer engineering]], [[control engineering]] and [[evolutionary biology]].<br />
<br />
它是科学的一个分支,涵盖了来自不同学科的广泛方法,例如生物技术、基因工程、分子生物学、分子工程、系统生物学、膜科学、生物物理学、化学和生物工程、电子和计算机工程、控制工程以及进化生物学。<br />
<br />
<br />
<br />
Due to more powerful [[genetic engineering]] capabilities and decreased DNA synthesis and [[DNA sequencing|sequencing costs]], the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<ref>{{cite journal | last1 = Bueso | first1 = F. Y. | last2 = Tangney | first2 = M. | year = 2017 | title = Synthetic Biology in the Driving Seat of the Bioeconomy | url = | journal = Trends in Biotechnology | volume = 35 | issue = 5| pages = 373–378 | doi = 10.1016/j.tibtech.2017.02.002 | pmid = 28249675 }}</ref><br />
<br />
由于更强大的基因工程能力和降低的 DNA 合成及测序成本,合成生物学领域正在迅速发展。2016年,来自40个国家的350多家公司积极参与合成生物学应用; 所有这些公司在全球市场的净值估计为39亿美元。<br />
<br />
<br />
<br />
== Definition 定义 ==<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
合成生物学目前还没有公认的定义。以下是一些定义的示例:<br />
<br />
<br />
<br />
* "the use of a mixture of physical engineering and genetic engineering to create new (and, therefore, synthetic) life forms混合使用物理工程和基因工程来创建新的(因而也即合成的)生命形式。"<ref>{{cite journal | last1 = Hunter | first1 = D | year = 2013 | title = How to object to radically new technologies on the basis of justice: the case of synthetic biology | url = | journal = Bioethics | volume = 27 | issue = 8| pages = 426–434 | doi = 10.1111/bioe.12049 | pmid = 24010854 }}</ref><br />
<br />
<br />
* "an emerging field of research that aims to combine the knowledge and methods of biology, engineering and related disciplines in the design of chemically synthesized DNA to create organisms with novel or enhanced characteristics and traits一个新兴的研究领域,旨在将生物学、工程学和相关学科领域的知识和方法结合到化学合成DNA的设计中,从而创造出具有新颖或增强特性和特征的有机体。"<ref>{{cite journal | last1 = Gutmann | first1 = A | year = 2011 | title = The ethics of synthetic biology: guiding principles for emerging technologies | url = | journal = Hastings Center Report | volume = 41 | issue = 4| pages = 17–22 | doi = 10.1002/j.1552-146x.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
* "designing and constructing [[BioBrick|biological modules]], [[biological systems]], and [[biological machine]]s or, re-design of existing biological systems for useful purposes设计并构建生物积木、生物系统以及生物机器,或为有用的目的重新设计现有的生物系统。"<ref name="NakanoEckford2013">{{cite book|url={{google books |plainurl=y |id=uVhsAAAAQBAJ}}|title=Molecular Communication|last1=Nakano|first1=Tadashi|last2=Eckford|first2=Andrew W.|last3=Haraguchi|first3=Tokuko|date=12 September 2013|publisher=Cambridge University Press|isbn=978-1-107-02308-6|name-list-style=vanc}}</ref><br />
<br />
<br />
* "applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature将系统设计的工程范式应用到生物系统中,以产生具有自然界中不存在的新功能的可预测且健全的系统" (The European Commission, 2005). This can include the possibility of a [[molecular assembler]], based upon biomolecular systems such as the [[ribosome]]这可能包括基于生物分子系统(例如核糖体)的分子组合器的可能性。<ref name="RoadMap">{{Cite web|url=http://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf|title=Productive Nanosystems: A Technology Roadmap|website=Foresight Institute}}</ref><br />
<br />
<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
值得注意的是,合成生物学在传统上被分为两种不同的方法: 自上而下和自下而上。<br />
<br />
<br />
<br />
# The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
自上而下的方法包括利用代谢和基因工程技术赋予活细胞以新的功能。<br />
<br />
# The <u>bottom up</u> approach involves creating new biological systems ''in vitro'' by bringing together 'non-living' biomolecular components,<ref>{{cite journal | vauthors = Schwille P | title = Bottom-up synthetic biology: engineering in a tinkerer's world | journal = Science | volume = 333 | issue = 6047 | pages = 1252–4 | date = September 2011 | pmid = 21885774 | doi = 10.1126/science.1211701 | bibcode = 2011Sci...333.1252S | s2cid = 43354332 }}</ref> often with the aim of constructing an [[artificial cell]].<br />
<br />
自下而上的方法包括在体外创建新的生物系统,将“非活性”的生物分子组件聚集在一起,其目的通常是构建一个人工细胞。<br />
<br />
<br />
<br />
Biological systems are thus assembled module-by-module. [[Cell-free protein synthesis|Cell-free protein expression systems]] are often employed,<ref>{{cite journal | vauthors = Noireaux V, Libchaber A | title = A vesicle bioreactor as a step toward an artificial cell assembly | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 101 | issue = 51 | pages = 17669–74 | date = December 2004 | pmid = 15591347 | pmc = 539773 | doi = 10.1073/pnas.0408236101 | bibcode = 2004PNAS..10117669N }}</ref><ref>{{cite journal | vauthors = Hodgman CE, Jewett MC | title = Cell-free synthetic biology: thinking outside the cell | journal = Metabolic Engineering | volume = 14 | issue = 3 | pages = 261–9 | date = May 2012 | pmid = 21946161 | pmc = 3322310 | doi = 10.1016/j.ymben.2011.09.002 }}</ref><ref>{{cite journal | vauthors = Elani Y, Law RV, Ces O | title = Protein synthesis in artificial cells: using compartmentalisation for spatial organisation in vesicle bioreactors | journal = Physical Chemistry Chemical Physics | volume = 17 | issue = 24 | pages = 15534–7 | date = June 2015 | pmid = 25932977 | doi = 10.1039/C4CP05933F | bibcode = 2015PCCP...1715534E | doi-access = free }}</ref> as are membrane-based molecular machinery. 
There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells,<ref>{{cite journal | vauthors = Elani Y, Trantidou T, Wylie D, Dekker L, Polizzi K, Law RV, Ces O | title = Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules | journal = Scientific Reports | volume = 8 | issue = 1 | pages = 4564 | date = March 2018 | pmid = 29540757 | pmc = 5852042 | doi = 10.1038/s41598-018-22263-3 | bibcode = 2018NatSR...8.4564E }}</ref> and engineering communication between living and synthetic cell populations.<ref>{{cite journal | vauthors = Lentini R, Martín NY, Forlin M, Belmonte L, Fontana J, Cornella M, Martini L, Tamburini S, Bentley WE, Jousson O, Mansy SS | title = Two-Way Chemical Communication between Artificial and Natural Cells | journal = ACS Central Science | volume = 3 | issue = 2 | pages = 117–123 | date = February 2017 | pmid = 28280778 | pmc = 5324081 | doi = 10.1021/acscentsci.6b00330 }}</ref><br />
<br />
生物系统就是这样一个模块一个模块地组装起来的。无细胞蛋白表达系统和基于膜的分子机器经常被采用。通过构建活细胞/合成细胞混合体,以及在活细胞与合成细胞群体之间建立工程化通讯,人们正在做出越来越多的努力来弥合这两种方法之间的差距。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
'''1910:''' First identifiable use of the term "synthetic biology" in [[Stéphane Leduc]]'s publication ''Théorie physico-chimique de la vie et générations spontanées''.<ref>[https://openlibrary.org/books/OL23348076M/Théorie_physico-chimique_de_la_vie_et_générations_spontanées Théorie physico-chimique de la vie et générations spontanées, S. Leduc, 1910]</ref> He also noted this term in another publication, ''La Biologie Synthétique'' in 1912.<ref>{{cite book |url=http://www.peiresc.org/bstitre.htm |title=La biologie synthétique, étude de biophysique |last=Leduc |first=Stéphane |date=1912 | veditors = Poinat A }}</ref><br />
<br />
1910年: 斯特凡纳·勒杜克 (Stéphane Leduc) 在其出版物《Théorie physico-chimique de la vie et générations spontanées》中首次可考地使用了“合成生物学”一词。他还在1912年的另一本出版物《La Biologie Synthétique》中提到了这个术语。<br />
<br />
<br />
<br />
'''1961:''' Jacob and Monod postulate cellular regulation by molecular networks from their study of the ''lac'' operon in ''E. coli'' and envisioned the ability to assemble new systems from molecular components.<ref>Jacob, F. & Monod, J. On the regulation of gene activity. Cold Spring Harb. Symp. Quant. Biol. 26, 193–211 (1961).</ref><br />
<br />
1961年: 雅各布 (Jacob) 和莫诺 (Monod) 通过对大肠杆菌中乳糖操纵子的研究,提出了由分子网络实现细胞调控的假设,并设想了用分子组件组装新系统的能力。<br />
<br />
<br />
<br />
'''1973:''' First molecular cloning and amplification of DNA in a plasmid is published in ''P.N.A.S.'' by Cohen, Boyer ''et al.'' constituting the dawn of synthetic biology.<ref>{{cite journal | vauthors = Cohen SN, Chang AC, Boyer HW, Helling RB | title = Construction of biologically functional bacterial plasmids in vitro | journal = Proc. Natl. Acad. Sci. USA | volume = 70 | issue = 11 | pages = 3240–3244 | date = 1973 | pmid = 4594039 | doi = 10.1073/pnas.70.11.3240 | bibcode = 1973PNAS...70.3240C | pmc = 427208 }}</ref><br />
<br />
1973年: 科恩 (Cohen)、博耶 (Boyer) 等人在《P.N.A.S.》上发表了第一例在质粒中进行 DNA 分子克隆和扩增的工作,标志着合成生物学的黎明。<br />
<br />
<br />
<br />
'''1978:''' [[Werner Arber|Arber]], [[Daniel Nathans|Nathans]] and [[Hamilton O. Smith|Smith]] win the [[Nobel Prize in Physiology or Medicine]] for the discovery of [[restriction enzyme]]s, leading Szybalski to offer an editorial comment in the journal ''[[Gene (journal)|Gene]]'':<br />
<br />
1978年: 阿尔伯 (Arber)、纳森斯 (Nathans) 和史密斯 (Smith) 因发现限制性内切酶而获得诺贝尔生理学或医学奖,这使得齐巴尔斯基 (Szybalski) 在《基因》(Gene) 杂志上发表了一篇社论评论:<br />
<br />
<br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.<ref>{{cite journal | vauthors = Szybalski W, Skalka A | title = Nobel prizes and restriction enzymes | journal = Gene | volume = 4 | issue = 3 | pages = 181–2 | date = November 1978 | pmid = 744485 | doi = 10.1016/0378-1119(78)90016-1 }}</ref></blockquote><br />
<br />
<blockquote>限制性核酸酶的研究不仅使我们能够很容易地构建重组 DNA 分子和分析单个基因,而且使我们进入了合成生物学的新时代: 不仅可以描述和分析现有的基因,而且可以构建和评估新的基因排列。</blockquote><br />
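The restriction-enzyme idea above can be illustrated computationally: an enzyme recognizes a short sequence motif and cuts the DNA at a fixed offset inside it. A minimal Python sketch follows; EcoRI's recognition site GAATTC and its cut position after the first G are real, while the example sequence and the `digest` helper are made up for illustration.

```python
# Toy illustration of a restriction digest: EcoRI recognizes the
# palindromic site GAATTC and cleaves after the first base (G^AATTC).
# The input sequence below is invented for the example.

ECORI_SITE = "GAATTC"
ECORI_CUT_OFFSET = 1  # cut lands between G and AATTC

def digest(seq, site=ECORI_SITE, cut_offset=ECORI_CUT_OFFSET):
    """Return the fragments left after cutting seq at every occurrence of site."""
    fragments, start = [], 0
    pos = seq.find(site)
    while pos != -1:
        fragments.append(seq[start:pos + cut_offset])
        start = pos + cut_offset
        pos = seq.find(site, pos + 1)
    fragments.append(seq[start:])
    return fragments

dna = "AAGAATTCTTTTGAATTCAA"  # contains two EcoRI sites
print(digest(dna))  # ['AAG', 'AATTCTTTTG', 'AATTCAA']
```

A sequence with no site comes back as a single uncut fragment, mirroring what a gel would show.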
<br />
<br />
<br />
'''1988:''' First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in ''Science'' by Mullis ''et al.''<ref>{{cite journal | vauthors = Saiki RK, Gelfand DH, Stoffel S, Scharf SJ, Higuchi R, Horn GT, Mullis KB, Erlich HA | title = Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase | journal = Science | volume = 239 | issue = 4839 | pages = 487–491 | date = 1988 | pmid = 2448875 | doi = 10.1126/science.239.4839.487 }}</ref> This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988年: 第一次利用热稳定的 DNA 聚合酶进行聚合酶链式反应以实现 DNA 扩增(PCR)的成果由马利斯 (Mullis) 等人发表在《科学》杂志上,这样就避免了在每次 PCR 循环后增加新的 DNA 聚合酶,从而大大简化了 DNA 的突变和组装。<br />
<br />
<br />
<br />
'''2000:''' Two papers in [[Nature (journal)|Nature]] report [[synthetic biological circuits]], a genetic toggle switch and a biological clock, by combining genes within [[Escherichia coli|''E. coli'']] cells.<ref name=":0">{{cite journal | vauthors = Elowitz MB, Leibler S | title = A synthetic oscillatory network of transcriptional regulators | journal = Nature | volume = 403 | issue = 6767 | pages = 335–8 | date = January 2000 | pmid = 10659856 | doi = 10.1038/35002125 | bibcode = 2000Natur.403..335E | s2cid = 41632754 }}</ref><ref name=":1">{{cite journal | vauthors = Gardner TS, Cantor CR, Collins JJ | title = Construction of a genetic toggle switch in Escherichia coli | journal = Nature | volume = 403 | issue = 6767 | pages = 339–42 | date = January 2000 | pmid = 10659857 | doi = 10.1038/35002131 | bibcode = 2000Natur.403..339G | s2cid = 345059 }}</ref><br />
<br />
2000年: 《自然》杂志的两篇论文报告了通过结合大肠杆菌细胞内的基因制造合成生物电路、基因切换开关和生物钟。<br />
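The genetic toggle switch reported in 2000 is commonly modeled as two genes whose protein products repress each other. The sketch below integrates a standard two-variable form of that model with simple Euler steps to show bistability; the parameter values are illustrative choices, not those fitted in the original paper.

```python
# Illustrative simulation of a mutual-repression toggle switch:
# repressor u inhibits synthesis of v and vice versa, so the system
# settles into one of two stable states depending on where it starts.
# Parameters (alpha, beta) are chosen for illustration only.

def simulate(u0, v0, alpha=10.0, beta=2.0, steps=20000, dt=0.01):
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v**beta) - u  # synthesis repressed by v, first-order decay
        dv = alpha / (1.0 + u**beta) - v  # synthesis repressed by u, first-order decay
        u, v = u + du * dt, v + dv * dt
    return u, v

# Two different initial conditions reach two different attractors.
state_a = simulate(u0=5.0, v0=0.0)  # settles with u high, v low
state_b = simulate(u0=0.0, v0=5.0)  # settles with v high, u low
print(state_a, state_b)
```

This bistability is exactly what makes the circuit usable as a one-bit memory: a transient inducer pulse can flip the system from one state to the other, and it stays there.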
<br />
<br />
<br />
'''2003:''' The most widely used standardized DNA parts, [[BioBrick]] plasmids, are invented by [[Tom Knight (scientist)|Tom Knight]].<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003年: 最广泛使用的标准化 DNA 部件,生物积木质粒,是由汤姆·奈特 (Tom Knight) 发明的。这些部分将成为第二年在麻省理工学院成立的国际基因工程机器竞赛 (iGEM) 的中心。<br />
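The point of the BioBrick standard is idempotent assembly: every part carries the same flanking prefix and suffix, and joining two parts yields a composite that is again a standard part (leaving a short "scar" at the junction), so it can be composed again. The sketch below is deliberately abstract, using placeholder strings rather than the real BioBrick prefix/suffix sequences.

```python
# Abstract sketch of idempotent BioBrick-style assembly.
# PREFIX/SUFFIX/SCAR are placeholders, not real restriction-site sequences.

PREFIX, SUFFIX, SCAR = "prefix-", "-suffix", "-scar-"

def make_part(insert):
    """Wrap an insert in the standard flanking sequences."""
    return PREFIX + insert + SUFFIX

def compose(a, b):
    """Join two standard parts; the composite is itself a standard part."""
    assert a.startswith(PREFIX) and a.endswith(SUFFIX)
    assert b.startswith(PREFIX) and b.endswith(SUFFIX)
    inner_a = a[len(PREFIX):-len(SUFFIX)]
    inner_b = b[len(PREFIX):-len(SUFFIX)]
    return make_part(inner_a + SCAR + inner_b)

device = compose(make_part("promoter"), make_part("gfp"))
print(device)  # prefix-promoter-scar-gfp-suffix
```

Because `compose` returns something `compose` itself accepts, arbitrarily deep hierarchies of parts can be built with one operation, which is the property the standard was designed around.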
<br />
<br />
<br />
[[File:Synthetic Biology Open Language (SBOL) standard visual symbols.png|thumb|upright=1.25| [[Synthetic Biology Open Language]] (SBOL) standard visual symbols for use with [[BioBrick|BioBricks Standard]] 与生物积木标准 (BioBricks Standard) 一起使用的合成生物学开放语言 (SBOL) 标准视觉符号]]<br />
<br />
<br />
<br />
'''2003:''' Researchers engineer an artemisinin precursor pathway in ''E. coli''.<ref>Martin, V. J., Pitera, D. J., Withers, S. T., Newman, J. D. & Keasling, J. D. Engineering a mevalonate pathway in Escherichia coli for production of terpenoids. Nature Biotech. 21, 796–802 (2003).</ref><br />
<br />
2003年: 研究人员在大肠杆菌中设计出青蒿素前体途径。<br />
<br />
<br />
<br />
'''2004:''' First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004年: 第一届合成生物学国际会议,合成生物学1.0(SB1.0)在美国麻省理工学院举行。<br />
<br />
<br />
<br />
'''2005:''' Researchers develop a light-sensing circuit in ''E. coli''.<ref>{{cite journal | last1 = Levskaya | first1 = A. | display-authors = etal | year = 2005 | title = Synthetic biology: engineering Escherichia coli to see light | url = | journal = Nature | volume = 438 | issue = 7067| pages = 441–442 | doi = 10.1038/nature04405 | pmid = 16306980 | s2cid = 4428475 }}</ref> Another group designs circuits capable of multicellular pattern formation.<ref>Basu, S., Gerchman, Y., Collins, C. H., Arnold, F. H. & Weiss, R. A synthetic multicellular system for programmed pattern formation. ''Nature'' 434,</ref><br />
<br />
2005年: 研究人员在大肠杆菌中开发出一种感光电路。另一个研究小组设计出了能够形成多细胞模式的电路。<br />
<br />
<br />
<br />
'''2006:''' Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<ref>{{cite journal | last1 = Anderson | first1 = J. C. | last2 = Clarke | first2 = E. J. | last3 = Arkin | first3 = A. P. | last4 = Voigt | first4 = C. A. | year = 2006 | title = Environmentally controlled invasion of cancer cells by engineered bacteria | url = | journal = J. Mol. Biol. | volume = 355 | issue = 4| pages = 619–627 | doi = 10.1016/j.jmb.2005.10.076 | pmid = 16330045 }}</ref><br />
<br />
2006年: 研究人员设计了一种能促进细菌侵入肿瘤细胞的合成电路。<br />
<br />
<br />
<br />
'''2010:''' Researchers publish in ''Science'' the first synthetic bacterial genome, called ''M. mycoides'' JCVI-syn1.0.<ref name="gibson52" /><ref>{{Cite news|url=https://www.telegraph.co.uk/news/science/science-news/7747779/American-scientist-who-created-artificial-life-denies-playing-God.html|title=American scientist who created artificial life denies 'playing God'|last=|first=|date=May 2010|website=The Telegraph|url-status=live|archive-url=|archive-date=|access-date=}}</ref> The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010年: 研究人员在《科学》杂志上发表了第一个人工合成的细菌基因组,名为丝状支原体 JCVI-syn1.0。该基因组是利用酵母重组技术由化学合成的 DNA 组装而成的。<br />
<br />
<br />
<br />
'''2011:''' Functional synthetic chromosome arms are engineered in yeast.<ref>{{cite journal | last1 = Dymond | first1 = J. S. | display-authors = etal | year = 2011 | title = Synthetic chromosome arms function in yeast and generate phenotypic diversity by design | url = | journal = Nature | volume = 477 | issue = 7365 | pages = 816–821 | doi = 10.1038/nature10403 | pmid = 21918511 | pmc = 3774833 }}</ref><br />
<br />
2011年: 成功在酵母中设计出功能性合成染色体臂。<br />
<br />
<br />
<br />
'''2012:''' Charpentier and Doudna labs publish in ''Science'' the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.<ref>{{cite journal | vauthors = Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E | title = A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity | journal = Science | volume = 337 | issue = 6096 | pages = 816–821 | date = 2012 | pmid = 22745249 | doi = 10.1126/science.1225829 | pmc = 6286148 }}</ref> This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012年: Charpentier 和 Doudna 实验室在《科学》杂志上发表了 CRISPR-Cas9细菌免疫系统的程序设计,用于靶向 DNA 的裂解。这项技术极大地简化和扩展了真核生物的基因编辑。<br />
<br />
<br />
<br />
'''2019:''' Scientists at [[ETH Zurich]] report the creation of the first [[bacterial genome]], named ''[[Caulobacter crescentus|Caulobacter ethensis-2.0]]'', made entirely by a computer, although a related [[wikt:viability|viable form]] of ''C. ethensis-2.0'' does not yet exist.<ref name="EA-20190401">{{cite news |author=ETH Zurich |title=First bacterial genome created entirely with a computer |url=https://www.eurekalert.org/pub_releases/2019-04/ez-fbg032819.php |date=1 April 2019 |work=[[EurekAlert!]] |accessdate=2 April 2019 |author-link=ETH Zurich }}</ref><ref name="PNAS20190401">{{cite journal |author=Venetz, Jonathan E. |display-authors=et al. |title=Chemical synthesis rewriting of a bacterial genome to achieve design flexibility and biological functionality |date=1 April 2019 |journal=[[Proceedings of the National Academy of Sciences of the United States of America]] |volume=116 |issue=16 |pages=8070–8079 |doi=10.1073/pnas.1818259116 |pmid=30936302 |pmc=6475421 }}</ref><br />
<br />
2019年: 苏黎世联邦理工学院 (ETH Zurich) 的科学家报告说,他们已经创造出了第一个细菌基因组,并将其命名为 Caulobacter ethensis-2.0 ,这个基因组完全是由计算机制造的,尽管与之相关的可存活的Caulobacter ethensis-2.0还不存在。<br />
<br />
<br />
<br />
'''2019:''' Researchers report the production of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 61 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515">{{cite news |last=Zimmer |first=Carl |authorlink=Carl Zimmer |title=Scientists Created Bacteria With a Synthetic Genome. Is This Artificial Life? - In a milestone for synthetic biology, colonies of E. coli thrive with DNA constructed from scratch by humans, not nature. |url=https://www.nytimes.com/2019/05/15/science/synthetic-genome-bacteria.html |date=15 May 2019 |work=[[The New York Times]] |accessdate=16 May 2019 }}</ref><ref name="NAT-20190515">{{cite journal |author=Fredens, Julius |display-authors=et al. |title=Total synthesis of Escherichia coli with a recoded genome |date=15 May 2019 |journal=[[Nature (journal)|Nature]] |volume=569 |issue=7757 |pages=514–518 |doi=10.1038/s41586-019-1192-5 |pmid=31092918 |pmc=7039709 |bibcode=2019Natur.569..514F }}</ref><br />

2019年: 研究人员报告了一种新的合成(可能是人工的)可存活生命形式的产生,它是大肠杆菌的一个变种: 通过将细菌基因组中天然的64个密码子减少到61个密码子来编码20个氨基酸。<br />
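Genome recoding of the kind described above works codon-by-codon: target codons are replaced throughout every coding sequence with synonymous codons, freeing the targets for later reassignment. A toy sketch follows; the substitution table is illustrative and should not be read as the exact mapping used in the study.

```python
# Toy codon-recoding sketch: rewrite a coding sequence so that chosen
# codons never appear, substituting synonymous codons that keep the
# protein unchanged. The table below is for illustration only.

RECODE = {
    "TCG": "AGC",  # both encode serine
    "TCA": "AGT",  # both encode serine
    "TAG": "TAA",  # both are stop codons
}

def recode(cds):
    """Recode a coding sequence codon-by-codon (length must be a multiple of 3)."""
    assert len(cds) % 3 == 0, "not a whole number of codons"
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(RECODE.get(codon, codon) for codon in codons)

print(recode("ATGTCGAAATCATAG"))  # ATGAGCAAAAGTTAA
```

Doing this across every gene of a genome, rather than one sequence, is what makes such projects a whole-genome synthesis effort rather than ordinary mutagenesis.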
<br />
<br />
<br />
== Perspectives 各方观点 ==<br />
<br />
Engineers view biology as a ''technology'' (in other words, a given system's ''[[biotechnology]]'' or its ''[[biological engineering]]'').<ref>{{cite journal | volume = 6 | last = Zeng | first = Jie (Bangzhe) | title = On the concept of systems bio-engineering | journal = Communication on Transgenic Animals, June 1994, CAS, PRC }}</ref> Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see [[Biomedical Engineering]]) and our environment.<ref>{{cite journal | volume = 6 | last = Chopra | first = Paras | author2 = Akhil Kamma | title = Engineering life through Synthetic Biology | journal = In Silico Biology }}</ref><br />
<br />
工程师将生物学视为一种技术(换句话说,一个特定系统的生物技术或其生物工程)。合成生物学包括对生物技术的广泛重新定义和扩展,其最终目标是能够设计和建造可处理信息、操纵化学品、制造材料和结构、生产能源、提供食物,并维护和增强人类健康(见生物医学工程)与环境的工程化生物系统。<br />
<br />
<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. {{citation needed|date=May 2020}}<br />
<br />
合成生物学的研究可以根据它们对手头问题所采取的方法再广泛细分为: 生物部分的标准化、生物分子工程、基因组工程。<br />
<br />
<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. [[Genetic engineering]] includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
生物分子工程包括旨在创建一个功能单元工具包的方法,这些功能单元可以用来展示活细胞中的新技术性功能。基因工程包括为整个或最小的有机体构建合成染色体的方法。<br />
<br />
<br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<ref>{{cite journal | vauthors = Channon K, Bromley EH, Woolfson DN | title = Synthetic biology through biomolecular design and engineering | journal = Current Opinion in Structural Biology | volume = 18 | issue = 4 | pages = 491–8 | date = August 2008 | pmid = 18644449 | doi = 10.1016/j.sbi.2008.06.006 }}</ref><br />
<br />
生物分子设计是指对生物分子组件进行从头设计与加成组合的总体思想。这些方法都有一个相似的任务: 通过在前一层次上创造性地操作更简单的部件,从而在更高的复杂性层次上开发出更具合成性的实体。<br />
<br />
<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate.<ref>{{cite journal | first = M | last = Stone | title = Life Redesigned to Suit the Engineering Crowd | journal = Microbe | volume = 1 | issue = 12 | pages = 566–570 | date = 2006 | s2cid = 7171812 | url = https://pdfs.semanticscholar.org/8d45/e0f37a0fb6c1a3c659c71ee9c52619b18364.pdf }}</ref> Re-writers draw inspiration from [[refactoring]], a process sometimes used to improve computer software.<br />
<br />
另一方面,“重写者”指的是对测试生物系统不可还原性感兴趣的合成生物学家。由于自然生物系统的复杂性,从头开始重建感兴趣的自然系统,以提供更容易理解、控制和操作的工程替代品,反而会更加简单。重写者从重构中获得灵感,这是一种有时用于改进计算机软件的过程。<br />
<br />
<br />
<br />
== Enabling technologies 使能技术 ==<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include [[standardization]] of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.<ref>{{cite journal | vauthors = Baker D, Church G, Collins J, Endy D, Jacobson J, Keasling J, Modrich P, Smolke C, Weiss R | title = Engineering life: building a fab for biology | journal = Scientific American | volume = 294 | issue = 6 | pages = 44–51 | date = June 2006 | pmid = 16711359 | doi = 10.1038/scientificamerican0606-44 | bibcode = 2006SciAm.294f..44B }}</ref> Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and [[computer-aided design]] (CAD).<br />
<br />
一些新的使能技术对于合成生物学的成功至关重要。相关概念包括生物元件的标准化和层次化抽象,以允许在合成系统中使用这些元件。基本技术包括读写 DNA(测序和合成)。为了精确地进行建模和计算机辅助设计(CAD),需要在多种条件下进行测量。<br />
<br />
<br />
<br />
=== DNA and gene synthesis DNA 和基因合成===<br />
<br />
{{Main|Artificial gene synthesis|Synthetic genomics}}Driven by dramatic decreases in costs of [[oligonucleotides|oligonucleotide]] ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level.<ref>{{cite journal | vauthors = Kosuri S, Church GM | title = Large-scale de novo DNA synthesis: technologies and applications | journal = Nature Methods | volume = 11 | issue = 5 | pages = 499–507 | date = May 2014 | pmid = 24781323 | doi = 10.1038/nmeth.2918 | pmc = 7098426 }}</ref> In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) [[Hepatitis C]] virus genome from chemically synthesized 60 to 80-mers.<ref>{{cite journal | vauthors = Blight KJ, Kolykhalov AA, Rice CM | title = Efficient initiation of HCV RNA replication in cell culture | journal = Science | volume = 290 | issue = 5498 | pages = 1972–4 | date = December 2000 | pmid = 11110665 | doi = 10.1126/science.290.5498.1972 | bibcode = 2000Sci...290.1972B }}</ref> In 2002 researchers at [[Stony Brook University]] succeeded in synthesizing the 7741 bp [[poliovirus]] genome from its published sequence, producing the second synthetic genome, spanning two years.<ref>{{cite journal | vauthors = Couzin J | title = Virology. 
Active poliovirus baked from scratch | journal = Science | volume = 297 | issue = 5579 | pages = 174–5 | date = July 2002 | pmid = 12114601 | doi = 10.1126/science.297.5579.174b | s2cid = 83531627 | url = https://semanticscholar.org/paper/248000e7bc654631ae217274a77253ceddf270a1 }}</ref> In 2003 the 5386 bp genome of the [[bacteriophage]] [[Phi X 174]] was assembled in about two weeks.<ref name="assembly2003">{{cite journal | vauthors = Smith HO, Hutchison CA, Pfannkoch C, Venter JC | title = Generating a synthetic genome by whole genome assembly: phiX174 bacteriophage from synthetic oligonucleotides | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 26 | pages = 15440–5 | date = December 2003 | pmid = 14657399 | pmc = 307586 | doi = 10.1073/pnas.2237126100 | bibcode = 2003PNAS..10015440S }}</ref> In 2006, the same team, at the [[J. Craig Venter Institute]], constructed and patented a [[Synthetic genomics|synthetic genome]] of a novel minimal bacterium, ''[[Mycoplasma laboratorium]]'' and were working on getting it functioning in a living cell.<ref>{{cite news|url=https://www.nytimes.com/2007/06/29/science/29cells.html|title=Scientists Transplant Genome of Bacteria|last=Wade|first=Nicholas|date=2007-06-29|work=The New York Times|access-date=2007-12-28|issn=0362-4331}}</ref><ref>{{cite journal | vauthors = Gibson DG, Benders GA, Andrews-Pfannkoch C, Denisova EA, Baden-Tillson H, Zaveri J, Stockwell TB, Brownley A, Thomas DW, Algire MA, Merryman C, Young L, Noskov VN, Glass JI, Venter JC, Hutchison CA, Smith HO | title = Complete chemical synthesis, assembly, and cloning of a Mycoplasma genitalium genome | journal = Science | volume = 319 | issue = 5867 | pages = 1215–20 | date = February 2008 | pmid = 18218864 | doi = 10.1126/science.1151721 | bibcode = 2008Sci...319.1215G | s2cid = 8190996 | url = https://semanticscholar.org/paper/8c662fd0e252c85d056aad7ff16009ebe1dd4cbc }}</ref><ref 
name="Ball">{{cite journal|last1=Ball|first1=Philip|date=2016|title=Man Made: A History of Synthetic Life|url=https://www.sciencehistory.org/distillations/magazine/man-made-a-history-of-synthetic-life|journal=Distillations|volume=2|issue=1|pages=15–23|access-date=22 March 2018}}</ref><br />
<br />
Driven by dramatic decreases in costs of oligonucleotide ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) Hepatitis C virus genome from chemically synthesized 60 to 80-mers. In 2002 researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome in an effort spanning two years. In 2003 the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and were working on getting it functioning in a living cell.<br />
<br />
由于寡核苷酸(oligos)合成成本的大幅下降和 PCR 的出现,由寡核苷酸构建的 DNA 尺寸已经达到基因组水平。2000年,研究人员报道了用化学合成的60至80聚体寡核苷酸合成 9.6 kbp 的丙型肝炎病毒基因组。2002年,石溪大学的研究人员成功地根据已发表的序列合成了 7741 bp 的脊髓灰质炎病毒基因组,历时两年,得到了第二个合成基因组。2003年,噬菌体 Phi X 174 的 5386 bp 基因组在大约两周内组装完成。2006年,克莱格·凡特(J. Craig Venter)研究所的同一团队构建了一种新型最小细菌 Mycoplasma laboratorium 的合成基因组并申请了专利,他们正在努力使其在活细胞中发挥功能。<br />
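The genome assemblies described above build long constructs from short overlapping oligos. As a purely illustrative sketch (the sequences, the fixed overlap length and the stitch helper are invented for this example, not part of any real assembly protocol such as PCA or Gibson assembly), the core string-stitching idea can be written as:<br />

```python
# Toy sketch of overlap assembly: join oligos that share a fixed-length
# overlap, the basic idea behind building long constructs from 60-80-mers.
# Real assembly handles strands, mismatches and annealing thermodynamics;
# none of that is modeled here.

def stitch(oligos, overlap):
    """Concatenate oligos whose ends overlap by exactly `overlap` bases."""
    construct = oligos[0]
    for oligo in oligos[1:]:
        assert construct[-overlap:] == oligo[:overlap], "oligos do not overlap"
        construct += oligo[overlap:]
    return construct

oligos = ["ATGGCTAGCTTACGATCG",
          "TTACGATCGCCATGGAAC",
          "CCATGGAACTGATCATAA"]
print(stitch(oligos, overlap=9))
# -> ATGGCTAGCTTACGATCGCCATGGAACTGATCATAA
```

Each 18-mer here shares nine bases with its neighbour, so three short oligos yield one 36 bp construct; scaling the same idea up (with error correction) is what makes genome-level synthesis from oligos practical.<br />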
<br />
<br />
<br />
In 2007 it was reported that several companies were offering [[gene synthesis|synthesis of genetic sequences]] up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks.<ref>{{cite news| issn = 0362-4331| last = Pollack| first = Andrew| title = How Do You Like Your Genes? Biofabs Take Orders | work = The New York Times | access-date = 2007-12-28| date = 2007-09-12 | url = https://www.nytimes.com/2007/09/12/technology/techspecial/12gene.html?pagewanted=2&_r=1}}</ref> [[Oligonucleotide]]s harvested from a photolithographic- or inkjet-manufactured [[DNA chip]], combined with PCR and DNA mismatch error-correction, allow inexpensive large-scale changes of [[codons]] in genetic systems to improve [[gene expression]] or incorporate novel amino acids (see [[George M. Church]]'s and Anthony Forster's synthetic cell projects<ref>{{Cite web|url=http://arep.med.harvard.edu/SBP|title=Synthetic Biology Projects|website=arep.med.harvard.edu|access-date=2018-02-17}}</ref><ref>{{cite journal | vauthors = Forster AC, Church GM | title = Towards synthesis of a minimal cell | journal = Molecular Systems Biology | volume = 2 | issue = 1 | pages = 45 | date = 2006-08-22 | pmid = 16924266 | pmc = 1681520 | doi = 10.1038/msb4100090 }}</ref>). This favors a synthesis-from-scratch approach.<br />
<br />
In 2007 it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip, combined with PCR and DNA mismatch error-correction, allow inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino acids (see George M. Church's and Anthony Forster's synthetic cell projects). This favors a synthesis-from-scratch approach.<br />
<br />
2007年有报道称,几家公司提供长达2000个碱基对(bp)的基因序列合成服务,价格约为每 bp 1美元,周转时间不到两周。从光刻或喷墨制造的 DNA 芯片上获取的寡核苷酸,结合 PCR 和 DNA 错配纠错,可以低成本、大规模地改变遗传系统中的密码子,从而改善基因表达或引入新的氨基酸(参见乔治·M·丘奇和安东尼·福斯特的合成细胞项目)。这有利于采用从头合成的方法。<br />
<br />
<br />
<br />
Additionally, the [[CRISPR|CRISPR/Cas]] system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years".<ref name="washpost_crispr">{{cite news|last1=Basulto|first1=Dominic|title=Everything you need to know about why CRISPR is such a hot technology|url=https://www.washingtonpost.com/news/innovations/wp/2015/11/04/everything-you-need-to-know-about-why-crispr-is-such-a-hot-technology/|access-date=5 December 2015|work=Washington Post|date=November 4, 2015}}</ref> While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.<ref name="washpost_crispr" /> Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in [[Do-it-yourself biology|biohacking]].<ref>{{cite news|last1=Kahn|first1=Jennifer|title=The Crispr Quandary|url=https://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html?_r=0|access-date=5 December 2015|work=New York Times|date=November 9, 2015}}</ref><ref>{{cite journal|last1=Ledford|first1=Heidi|title=CRISPR, the disruptor|url=http://www.nature.com/news/crispr-the-disruptor-1.17673|access-date=5 December 2015|agency=Nature News|journal=Nature|date=June 3, 2015|pmid=26040877|doi=10.1038/522020a|volume=522|issue=7554|pages=20–4|bibcode=2015Natur.522...20L|doi-access=free}}</ref><ref>{{cite magazine|last1=Higginbotham|first1=Stacey|title=Top VC Says Gene Editing Is Riskier Than Artificial Intelligence|url=http://fortune.com/2015/12/04/khosla-crispr-ai/|access-date=5 December 2015|magazine=Fortune|date=4 December 2015}}</ref><br />
<br />
Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.<br />
<br />
此外,CRISPR/Cas 系统已经成为一种很有前途的基因编辑技术。它被称作“近30年来合成生物学领域最重要的创新”。虽然其他方法需要数月或数年来编辑基因序列,CRISPR 将这个时间缩短到数周。<br />
<br />
<br />
<br />
=== Sequencing 测序 ===<br />
<br />
[[DNA sequencing]] determines the order of [[nucleotide]] bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<ref>{{cite journal| author = Rollie| date = 2012 |title = Designing biological systems: Systems Engineering meets Synthetic Biology| journal = Chemical Engineering Science| volume = 69 | pages = 1–29| doi=10.1016/j.ces.2011.10.068| issue=1|display-authors=etal}}</ref><br />
<br />
DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<br />
<br />
DNA 测序确定 DNA 分子中核苷酸碱基的顺序。合成生物学家在工作中以几种方式使用 DNA 测序。首先,大规模的基因组测序工作继续提供有关天然生物体的信息。这些信息为合成生物学家构建元件和装置提供了丰富的素材。其次,测序可以验证所构建的系统是否符合预期。第三,快速、廉价和可靠的测序有助于快速检测和识别合成系统和生物体。<br />
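The second use of sequencing mentioned above, verifying that a fabricated system matches its design, can be sketched in a few lines. This is a hedged illustration only: real verification uses alignment tools that handle insertions and deletions, while the toy function below reports simple substitutions at matching positions (the example sequences are invented):<br />

```python
# List (position, designed base, observed base) substitutions between a
# designed sequence and a sequencing read of the same length.

def mismatches(designed, sequenced):
    return [(i, d, s)
            for i, (d, s) in enumerate(zip(designed, sequenced))
            if d != s]

designed  = "ATGACCGGT"
sequenced = "ATGACTGGT"
print(mismatches(designed, sequenced))  # -> [(5, 'C', 'T')]
```

An empty list would confirm the construct matches its design at every compared position.<br />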
<br />
<br />
<br />
=== Microfluidics 微流控 ===<br />
<br />
[[Microfluidics]], in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them.<ref>{{cite journal | vauthors = Elani Y | title = Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology | journal = Biochemical Society Transactions | volume = 44 | issue = 3 | pages = 723–30 | date = June 2016 | pmid = 27284034 | pmc = 4900754 | doi = 10.1042/BST20160052 }}</ref><ref>{{cite journal | vauthors = Gach PC, Iwai K, Kim PW, Hillson NJ, Singh AK | title = Droplet microfluidics for synthetic biology | journal = Lab on a Chip | volume = 17 | issue = 20 | pages = 3388–3400 | date = October 2017 | pmid = 28820204 | doi = 10.1039/C7LC00576H | osti = 1421856 | url = http://www.escholarship.org/uc/item/6cr3k0v5 }}</ref> It is widely employed in screening assays.<ref>{{cite journal | vauthors = Vinuselvi P, Park S, Kim M, Park JM, Kim T, Lee SK | title = Microfluidic technologies for synthetic biology | journal = International Journal of Molecular Sciences | volume = 12 | issue = 6 | pages = 3576–93 | date = 2011-06-03 | pmid = 21747695 | pmc = 3131579 | doi = 10.3390/ijms12063576 }}</ref><br />
<br />
Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them. It is widely employed in screening assays.<br />
<br />
微流控,特别是液滴微流控,是一种新兴的工具,用于构建新的元件,并对其进行分析和表征。它被广泛应用于筛选实验。<br />
<br />
<br />
<br />
=== Modularity 模块化 ===<br />
<br />
The most used<ref name="primer">{{Cite book|title=Synthetic Biology – A Primer|last1=Freemont|first1=Paul S.|last2=Kitney|first2=Richard I.| name-list-style = vanc |date=2012|publisher=World Scientific|isbn=978-1-84816-863-3|doi=10.1142/p837}}</ref>{{rp|22–23}} standardized DNA parts are [[BioBrick]] plasmids, invented by [[Tom Knight (scientist)|Tom Knight]] in 2003.<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> Biobricks are stored at the [[Registry of Standard Biological Parts]] in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the [[international Genetically Engineered Machine]] (iGEM) competition.<ref name="primer" />{{rp|22–23}}<br />
<br />
The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. Biobricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition. While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and link different proteins together. The interaction strength between protein partners should be tunable from a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilience to harsh conditions). Interactions such as coiled coils, SH3 domain-peptide binding or SpyTag/SpyCatcher offer such control. In addition it is necessary to regulate protein-protein interactions in cells, such as with light (using light-oxygen-voltage-sensing domains) or cell-permeable small molecules by chemically induced dimerization.<br />
<br />
最常用的标准化 DNA 元件是生物积木(BioBrick)质粒,由汤姆·奈特于2003年发明。生物积木储存在马萨诸塞州剑桥的标准生物元件注册处。生物积木标准已被全世界成千上万的学生用于国际基因工程机器竞赛(iGEM)。虽然 DNA 对信息存储最为重要,但细胞的大部分活动是由蛋白质完成的。一些工具可以将蛋白质送到细胞的特定区域,并将不同的蛋白质连接在一起。蛋白质伙伴之间的相互作用强度应当是可调的:从几秒的寿命(适合动态信号事件)到不可逆的相互作用(适合装置稳定性或耐受苛刻条件)。卷曲螺旋、SH3结构域-多肽结合或 SpyTag/SpyCatcher 等相互作用能提供这样的控制。此外,还需要调控细胞中的蛋白质-蛋白质相互作用,例如利用光(使用光-氧-电压感应结构域)或通过化学诱导二聚化的细胞渗透性小分子。<br />
<br />
<br />
<br />
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and link different proteins together. The interaction strength between protein partners should be tunable from a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilience to harsh conditions). Interactions such as [[coiled coil]]s,<ref>{{cite journal | vauthors = Woolfson DN, Bartlett GJ, Bruning M, Thomson AR | title = New currency for old rope: from coiled-coil assemblies to α-helical barrels | journal = Current Opinion in Structural Biology | volume = 22 | issue = 4 | pages = 432–41 | date = August 2012 | pmid = 22445228 | doi = 10.1016/j.sbi.2012.03.002 }}</ref> [[SH3 domain]]-peptide binding<ref>{{cite journal | vauthors = Dueber JE, Wu GC, Malmirchegini GR, Moon TS, Petzold CJ, Ullal AV, Prather KL, Keasling JD | title = Synthetic protein scaffolds provide modular control over metabolic flux | journal = Nature Biotechnology | volume = 27 | issue = 8 | pages = 753–9 | date = August 2009 | pmid = 19648908 | doi = 10.1038/nbt.1557 | s2cid = 2756476 }}</ref> or [[SpyCatcher|SpyTag/SpyCatcher]]<ref>{{cite journal | vauthors = Reddington SC, Howarth M | title = Secrets of a covalent interaction for biomaterials and biotechnology: SpyTag and SpyCatcher | journal = Current Opinion in Chemical Biology | volume = 29 | pages = 94–9 | date = December 2015 | pmid = 26517567 | doi = 10.1016/j.cbpa.2015.10.002 | doi-access = free }}</ref> offer such control. 
In addition it is necessary to regulate protein-protein interactions in cells, such as with light (using [[light-oxygen-voltage-sensing domain]]s) or cell-permeable small molecules by [[chemically induced dimerization]].<ref>{{cite journal | vauthors = Bayle JH, Grimley JS, Stankunas K, Gestwicki JE, Wandless TJ, Crabtree GR | title = Rapamycin analogs with differential binding specificity permit orthogonal control of protein activity | journal = Chemistry & Biology | volume = 13 | issue = 1 | pages = 99–107 | date = January 2006 | pmid = 16426976 | doi = 10.1016/j.chembiol.2005.10.017 | doi-access = free }}</ref><br />
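The tunable interaction lifetime mentioned above has a simple quantitative reading: for a bimolecular interaction, the mean bound lifetime is the reciprocal of the off-rate. The off-rate values below are illustrative assumptions, not measured constants for any of the systems named:<br />

```python
# Mean bound lifetime of a protein-protein interaction from its off-rate.

def mean_lifetime_s(k_off_per_s):
    return 1.0 / k_off_per_s

print(mean_lifetime_s(1.0))   # k_off = 1 /s    -> ~1 s, transient signaling-style contact
print(mean_lifetime_s(1e-6))  # k_off = 1e-6 /s -> 1e6 s (~12 days), effectively irreversible
```

Covalent couplings such as SpyTag/SpyCatcher sit at the extreme of this scale: once the bond forms, the off-rate is effectively zero.<br />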
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<br />
<br />
在活细胞中,分子模体嵌入在一个具有上下游组件的更大网络中。这些组件可能改变模块的信号传导能力。对于超敏模块而言,模块贡献的灵敏度可能不同于该模块单独存在时所表现的灵敏度。<br />
<br />
<br />
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<ref name="altszylerUltrasens2014">{{cite journal | vauthors = Altszyler E, Ventura A, Colman-Lerner A, Chernomoretz A | title = Impact of upstream and downstream constraints on a signaling module's ultrasensitivity | journal = Physical Biology | volume = 11 | issue = 6 | pages = 066003 | date = October 2014 | pmid = 25313165 | pmc = 4233326 | doi = 10.1088/1478-3975/11/6/066003 | bibcode = 2014PhBio..11f6003A }}</ref><ref name="altszylerUltrasens2017">{{cite journal | vauthors = Altszyler E, Ventura AC, Colman-Lerner A, Chernomoretz A | title = Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations | journal = PLOS ONE | volume = 12 | issue = 6 | pages = e0180083 | year = 2017 | pmid = 28662096 | pmc = 5491127 | doi = 10.1371/journal.pone.0180083 | bibcode = 2017PLoSO..1280083A | arxiv = 1608.08007 }}</ref><br />
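The ultrasensitivity discussed above can be made concrete with a Hill-type module. In the sketch below (parameter values are illustrative), the apparent Hill coefficient of an isolated module is recovered from the input range spanning 10% to 90% of maximal output; embedding the same module behind saturating upstream components would change this apparent value, which is the point of the cited work:<br />

```python
import math

# An isolated Hill module y = x^n / (K^n + x^n), with n = 4 (illustrative).
def hill(x, K=1.0, n=4.0):
    return x**n / (K**n + x**n)

# Input giving a fraction `frac` of maximal output (inverse of the Hill eq.).
def ec(frac, K=1.0, n=4.0):
    return K * (frac / (1.0 - frac)) ** (1.0 / n)

# Apparent Hill coefficient from the EC90/EC10 ratio (the classic
# log(81) / log(EC90/EC10) estimator of global ultrasensitivity).
n_app = math.log(81.0) / math.log(ec(0.9) / ec(0.1))
print(round(n_app, 2))  # -> 4.0: in isolation the module shows its full sensitivity
```

For a hyperbolic (n = 1) module the same estimator returns 1.0; values above 1 mark an ultrasensitive, switch-like response.<br />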
<br />
<br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in transcription, translation, regulation and induction of gene regulatory networks.<br />
<br />
模型通过在构建之前更好地预测系统行为来指导工程生物系统的设计。合成生物学受益于更好的模型:生物分子如何结合底物并催化反应,DNA 如何编码指定细胞所需的信息,以及多组分集成系统如何运作。基因调控网络的多尺度模型侧重于合成生物学应用。模拟可以对基因调控网络的转录、翻译、调节和诱导中的所有生物分子相互作用进行建模。<br />
<br />
=== Modeling 建模 ===<br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in [[Transcription (biology)|transcription]], [[Translation (biology)|translation]], regulation and induction of gene regulatory networks.<ref>{{cite journal | vauthors = Carbonell-Ballestero M, Duran-Nebreda S, Montañez R, Solé R, Macía J, Rodríguez-Caso C | title = A bottom-up characterization of transfer functions for synthetic biology designs: lessons from enzymology | journal = Nucleic Acids Research | volume = 42 | issue = 22 | pages = 14060–14069 | date = December 2014 | pmid = 25404136 | pmc = 4267673 | doi = 10.1093/nar/gku964 }}</ref><br />
<br />
<ref>{{cite journal | vauthors = Kaznessis YN | title = Models for synthetic biology | journal = BMC Systems Biology | volume = 1 | issue = 1 | pages = 47 | date = November 2007 | pmid = 17986347 | pmc = 2194732 | doi = 10.1186/1752-0509-1-47 }}</ref><br />
<br />
<ref>{{cite conference |vauthors=Tuza ZA, Singhal V, Kim J, Murray RM | title = An in silico modeling toolbox for rapid prototyping of circuits in a biomolecular "breadboard" system. |book-title=52nd IEEE Conference on Decision and Control |date=December 2013 |doi=10.1109/CDC.2013.6760079}}</ref><br />
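As a minimal, hedged illustration of the kind of simulation described above, the sketch below integrates a single negatively autoregulated gene with forward Euler. The rate constants are invented for the example; real tools simulate full transcription/translation networks, often stochastically:<br />

```python
# One gene repressing its own promoter: dp/dt = beta/(1 + (p/K)^n) - gamma*p.

def simulate(beta=10.0, K=1.0, n=2.0, gamma=1.0, dt=0.01, steps=2000):
    p = 0.0  # protein level, arbitrary units
    for _ in range(steps):
        production = beta / (1.0 + (p / K) ** n)  # repressible promoter
        p += dt * (production - gamma * p)        # synthesis minus dilution/decay
    return p

steady = simulate()
print(round(steady, 3))  # settles at the root of p^3 + p = 10, i.e. p = 2
```

Negative autoregulation of this kind is a common design motif because it speeds the approach to steady state and buffers the output level against parameter variation.<br />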
<br />
<br />
<br />
Studies have considered the components of the DNA transcription mechanism. One desire of scientists creating synthetic biological circuits is to be able to control the transcription of synthetic DNA in unicellular organisms (prokaryotes) and in multicellular organisms (eukaryotes). One study tested the adjustability of synthetic transcription factors (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes. Researchers were able to mutate functional regions called zinc fingers, the DNA specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the eukaryotic translation mechanisms.<br />
A biological computer refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of logic gates in a number of organisms, and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform both analog and/or digital computation. In human cells, research demonstrated a universal logic evaluator that operates in mammalian cells in 2007. Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011. Another group of researchers demonstrated in 2016 that principles of computer engineering can be used to automate digital circuit design in bacterial cells. In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.<br />
<br />
研究已经考察了 DNA 转录机制的组成部分。创造合成生物电路的科学家的一个愿望,是能够控制单细胞生物(原核生物)和多细胞生物(真核生物)中合成 DNA 的转录。一项研究测试了合成转录因子(sTFs)在转录输出和多个转录因子复合物之间协同能力方面的可调节性。研究人员能够突变称为锌指的功能区(sTF 中特异性识别 DNA 的组件),以降低它们对特定操纵序列位点的亲和力,从而降低 sTF 相关的位点特异性活性(通常是转录调控)。他们进一步将锌指用作形成复合物的 sTFs 的组件。研究人员在活细胞中展示了模拟和数字计算,证明了可以改造细菌使其执行模拟和/或数字计算。2007年,研究展示了一种在哺乳动物细胞中运作的通用逻辑求值器。随后,研究人员在2011年利用这一范式展示了一种概念验证疗法,利用生物数字计算来检测和杀死人类癌细胞。另一组研究人员在2016年证明,计算机工程的原理可以用来自动化细菌细胞中的数字电路设计。2017年,研究人员演示了“通过 DNA 删除实现布尔逻辑和算术”(BLADE)系统,用于在人类细胞中实现数字计算。<br />
<br />
=== Synthetic transcription factors 合成转录因子 ===<br />
<br />
Studies have considered the components of the [[Transcription (biology)|DNA transcription]] mechanism. One desire of scientists creating [[synthetic biological circuit]]s is to be able to control the transcription of synthetic DNA in unicellular organisms ([[prokaryote]]s) and in multicellular organisms ([[eukaryote]]s). One study tested the adjustability of synthetic [[transcription factor]]s (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes.<ref name="Khalil AS 2012">{{cite journal | vauthors = Khalil AS, Lu TK, Bashor CJ, Ramirez CL, Pyenson NC, Joung JK, Collins JJ | title = A synthetic biology framework for programming eukaryotic transcription functions | journal = Cell | volume = 150 | issue = 3 | pages = 647–58 | date = August 2012 | pmid = 22863014 | pmc = 3653585 | doi = 10.1016/j.cell.2012.05.045 }}</ref> Researchers were able to mutate functional regions called [[zinc finger]]s, the DNA specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the [[eukaryotic translation]] mechanisms.<ref name="Khalil AS 2012"/><br />
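The affinity argument in the study above can be sketched with a simple equilibrium-binding calculation. All numbers here are illustrative assumptions, not values from the cited work: occupancy of an operator site falls as the dissociation constant Kd rises, which is why weakening a zinc finger lowers the sTF's site-specific activity:<br />

```python
# Equilibrium occupancy of an operator site: theta = [TF] / ([TF] + Kd).

def occupancy(tf_nM, kd_nM):
    return tf_nM / (tf_nM + kd_nM)

tf = 50.0  # free transcription-factor concentration, nM (assumed)
print(round(occupancy(tf, kd_nM=10.0), 3))   # high-affinity zinc finger  -> 0.833
print(round(occupancy(tf, kd_nM=500.0), 3))  # mutated, weakened finger   -> 0.091
```

Because transcriptional output roughly tracks operator occupancy, a 50-fold increase in Kd converts a mostly-bound site into a mostly-empty one.<br />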
<br />
<br />
<br />
A biosensor refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the Lux operon of Aliivibrio fischeri, which codes for the enzyme that is the source of bacterial bioluminescence, and can be placed after a respondent promoter to express the luminescence genes in response to a specific environmental stimulus. One such sensor consisted of a bioluminescent bacterial coating on a photosensitive computer chip to detect certain petroleum pollutants. When the bacteria sense the pollutant, they luminesce. Another example of a similar mechanism is the detection of landmines by an engineered E. coli reporter strain capable of detecting TNT and its main degradation product DNT, and consequently producing a green fluorescent protein (GFP).<br />
<br />
生物传感器指一种经过工程改造的生物体(通常是细菌),能够报告某些环境现象,如重金属或毒素的存在。其中一个这样的系统是费氏弧菌(Aliivibrio fischeri)的 Lux 操纵子,它编码的酶是细菌生物发光的来源,可以置于应答启动子之后,以响应特定的环境刺激表达发光基因。其中一种传感器由覆盖在光敏计算机芯片上的发光细菌涂层构成,用以检测某些石油污染物。当细菌感知到污染物时,它们就会发光。另一个类似机制的例子是地雷检测:一种经过工程改造的大肠杆菌报告菌株能够检测 TNT 及其主要降解产物 DNT,并随之产生绿色荧光蛋白(GFP)。<br />
<br />
== Applications 应用 ==<br />
<br />
=== Biological computers 生物计算机 ===<br />
<br />
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.<br />
<br />
改良有机体可以感知环境信号,并发送能够被检测到的输出信号,用于诊断目的。微生物群落已经被应用于这种用途。<br />
<br />
A [[biological computer]] refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of [[logic gate]]s in a number of organisms,<ref>{{cite journal | vauthors = Singh V | title = Recent advances and opportunities in synthetic logic gates engineering in living cells | journal = Systems and Synthetic Biology | volume = 8 | issue = 4 | pages = 271–82 | date = December 2014 | pmid = 26396651 | pmc = 4571725 | doi = 10.1007/s11693-014-9154-6 }}</ref> and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform both analog and/or digital computation.<ref>{{cite journal | vauthors = Purcell O, Lu TK | title = Synthetic analog and digital circuits for cellular computation and memory | journal = Current Opinion in Biotechnology | volume = 29 | pages = 146–55 | date = October 2014 | pmid = 24794536 | pmc = 4237220 | doi = 10.1016/j.copbio.2014.04.009 | series = Cell and Pathway Engineering }}</ref><ref>{{cite journal | vauthors = Daniel R, Rubens JR, Sarpeshkar R, Lu TK | title = Synthetic analog computation in living cells | journal = Nature | volume = 497 | issue = 7451 | pages = 619–23 | date = May 2013 | pmid = 23676681 | doi = 10.1038/nature12148 | bibcode = 2013Natur.497..619D | s2cid = 4358570 }}</ref> In human cells research demonstrated a universal logic evaluator that operates in mammalian cells in 2007.<ref>{{cite journal | vauthors = Rinaudo K, Bleris L, Maddamsetti R, Subramanian S, Weiss R, Benenson Y | title = A universal RNAi-based logic evaluator that operates in mammalian cells | journal = Nature Biotechnology | volume = 25 | issue = 7 | pages = 795–801 | date = July 2007 | pmid = 17515909 | doi = 10.1038/nbt1307 | s2cid = 280451 }}</ref> Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital 
computation to detect and kill human cancer cells in 2011.<ref>{{cite journal | vauthors = Xie Z, Wroblewska L, Prochazka L, Weiss R, Benenson Y | title = Multi-input RNAi-based logic circuit for identification of specific cancer cells | journal = Science | volume = 333 | issue = 6047 | pages = 1307–11 | date = September 2011 | pmid = 21885784 | doi = 10.1126/science.1205527 | bibcode = 2011Sci...333.1307X | s2cid = 13743291 | url = https://semanticscholar.org/paper/372e175668b5323d79950b58f12b36f6974a81ef }}</ref> Another group of researchers demonstrated in 2016 that principles of [[computer engineering]] can be used to automate digital circuit design in bacterial cells.<ref>{{cite journal | vauthors = Nielsen AA, Der BS, Shin J, Vaidyanathan P, Paralanov V, Strychalski EA, Ross D, Densmore D, Voigt CA | title = Genetic circuit design automation | journal = Science | volume = 352 | issue = 6281 | pages = aac7341 | date = April 2016 | pmid = 27034378 | doi = 10.1126/science.aac7341 | doi-access = free }}</ref> In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.<ref>{{cite journal | vauthors = Weinberg BH, Pham NT, Caraballo LD, Lozanoski T, Engel A, Bhatia S, Wong WW | title = Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells | journal = Nature Biotechnology | volume = 35 | issue = 5 | pages = 453–462 | date = May 2017 | pmid = 28346402 | pmc = 5423837 | doi = 10.1038/nbt.3805 }}</ref><br />
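Design-automation tools of the kind described above compose genetic NOR gates (a promoter repressed by either of two inputs) into arbitrary logic, since NOR is functionally complete. The Boolean sketch below shows only the gate-composition idea, building XOR from five NOR gates; it does not model any published genetic circuit:<br />

```python
# NOR is functionally complete: here, XOR built from five NOR gates.

def NOR(a, b):
    return not (a or b)

def XOR(a, b):
    n1 = NOR(a, b)
    n2 = NOR(a, n1)
    n3 = NOR(b, n1)
    n4 = NOR(n2, n3)   # n4 is XNOR of a and b
    return NOR(n4, n4) # a NOR with tied inputs acts as an inverter

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(XOR(a, b)))
```

In a genetic implementation each NOR is a repressible promoter and each wire is a repressor protein, so the circuit depth is limited by how many orthogonal repressors the cell can host.<br />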
<br />
<br />
<br />
=== Biosensors 生物传感器 ===<br />
<br />
Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and proteins. Synthetic biologists have designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.<br />
<br />
细胞使用相互作用的基因和蛋白质(即所谓的基因回路)来实现不同的功能,如响应环境信号、决策和通讯。其中涉及三个关键组分:DNA、RNA 和蛋白质。合成生物学家设计的基因回路可以在转录、转录后和翻译等多个水平上控制基因表达。<br />
<br />
A [[biosensor]] refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the [[Luciferase|Lux operon]] of ''[[Aliivibrio fischeri]],''<ref>{{cite journal | vauthors = de Almeida PE, van Rappard JR, Wu JC | title = In vivo bioluminescence for tracking cell fate and function | journal = American Journal of Physiology. Heart and Circulatory Physiology | volume = 301 | issue = 3 | pages = H663–71 | date = September 2011 | pmid = 21666118 | pmc = 3191083 | doi = 10.1152/ajpheart.00337.2011 }}</ref> which codes for the enzyme that is the source of bacterial [[bioluminescence]], and can be placed after a respondent [[Promoter (genetics)|promoter]] to express the luminescence genes in response to a specific environmental stimulus.<ref>{{cite journal | vauthors = Close DM, Xu T, Sayler GS, Ripp S | title = In vivo bioluminescent imaging (BLI): noninvasive visualization and interrogation of biological processes in living animals | journal = Sensors | volume = 11 | issue = 1 | pages = 180–206 | date = 2011 | pmid = 22346573 | pmc = 3274065 | doi = 10.3390/s110100180 }}</ref> One such sensor consisted of a [[bioluminescent bacteria]]l coating on a photosensitive [[computer chip]] to detect certain [[petroleum]] [[pollutant]]s. When the bacteria sense the pollutant, they luminesce.<ref>{{cite journal|last=Gibbs|first=W. 
Wayt| name-list-style = vanc |date=1997 |title=Critters on a Chip |url=http://www.sciam.com/article.cfm?id=critters-on-a-chip |journal=Scientific American|access-date=2 Mar 2009}}</ref> Another example of a similar mechanism is the detection of landmines by an engineered ''E.coli'' reporter strain capable of detecting [[TNT]] and its main degradation product [[2,4-Dinitrotoluene|DNT]], and consequently producing a green fluorescent protein ([[Green fluorescent protein|GFP]]).<ref>{{Cite journal|last1=Belkin|first1=Shimshon|last2=Yagur-Kroll|first2=Sharon|last3=Kabessa|first3=Yossef|last4=Korouma|first4=Victor|last5=Septon|first5=Tali|last6=Anati|first6=Yonatan|last7=Zohar-Perez|first7=Cheinat|last8=Rabinovitz|first8=Zahi|last9=Nussinovitch|first9=Amos|date=April 2017|title=Remote detection of buried landmines using a bacterial sensor|journal=Nature Biotechnology|volume=35|issue=4|pages=308–310|doi=10.1038/nbt.3791|pmid=28398330|s2cid=3645230|issn=1087-0156}}</ref><br />
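The reporter logic described above can be caricatured as a dose-response curve plus a detection threshold. The Hill parameters, basal leak and threshold below are illustrative assumptions, not measured values for the Lux system or the TNT sensor:<br />

```python
# Lux-like reporter: basal leak plus a Hill-shaped response to the analyte;
# output above a threshold is read out as "detected".

def reporter_output(analyte, K=5.0, n=2.0, basal=0.05, vmax=1.0):
    return basal + vmax * analyte**n / (K**n + analyte**n)

def detected(analyte, threshold=0.5):
    return reporter_output(analyte) > threshold

for conc in (0.0, 1.0, 5.0, 20.0):
    print(conc, round(reporter_output(conc), 2), detected(conc))
```

Practical sensor engineering is largely about shaping this curve: lowering the basal leak and steepening the response (larger n) sharpens the boundary between "clean" and "contaminated" readings.<br />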
<br />
<br />
<br />
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering E. coli and yeast for commercial production of a precursor of the antimalarial drug, Artemisinin.<br />
<br />
传统的代谢工程学已经通过引入外源基因的组合和定向进化的优化得到了支持。这包括改造大肠杆菌和酵母菌,用于商业化生产抗疟药物青蒿素的前体。<br />
<br />
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.<ref name="pmid26019220">{{cite journal | vauthors = Danino T, Prindle A, Kwong GA, Skalak M, Li H, Allen K, Hasty J, Bhatia SN | title = Programmable probiotics for detection of cancer in urine | journal = Science Translational Medicine | volume = 7 | issue = 289 | pages = 289ra84 | date = May 2015 | pmid = 26019220 | pmc = 4511399 | doi = 10.1126/scitranslmed.aaa3519 }}</ref><br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. <br />
Several ways allow constructing synthetic DNA components and even entire synthetic genomes, but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or phenotypes while growing and thriving. Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs.<br />
<br />
虽然活细胞可以通过新的 DNA 转化,但整个有机体还没有从头开始创造。有几种方法可以构建合成 DNA 组件,甚至是整个合成基因组,但是一旦获得了所需的遗传密码,它就会被整合到一个活细胞中,这个活细胞在生长和发育的过程中,有望表现所需的新能力或表型。细胞转化被用于创造生物电路,我们可以通过操纵这些电路来产生所需的输出。<br />
<br />
=== Cell transformation 细胞转化 ===<br />
<br />
{{Main|Transformation (genetics)}}Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and proteins. Synthetic biologists have designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.<br />
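The transcriptional control layer of such a gene circuit is commonly modeled with a Hill repression function, as in the ''lac'' operon shown at the top of this article. A minimal sketch, with illustrative parameter values:

```python
# Minimal sketch of transcriptional repression in a gene circuit,
# modeled with a Hill function. Parameter values are illustrative.

def promoter_activity(repressor, k_max=100.0, K=10.0, n=2.0):
    """Transcription rate as a function of repressor concentration.

    k_max : maximal transcription rate (arbitrary units)
    K     : repressor concentration giving half-maximal repression
    n     : Hill coefficient (cooperativity of repressor binding)
    """
    return k_max / (1.0 + (repressor / K) ** n)

# With no repressor the promoter is fully active;
# at high repressor levels output approaches zero.
print(promoter_activity(0.0))    # 100.0
print(promoter_activity(10.0))   # 50.0
print(promoter_activity(100.0))  # ~0.99
```

Chaining several such functions (the output protein of one stage acting as the repressor of the next) is the basic modeling trick behind synthetic cascades and oscillators.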
<br />
<br />
<br />
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering ''E. coli'' and [[yeast]] for commercial production of a precursor of the [[Antimalarial medication|antimalarial drug]], [[Artemisinin]].<ref>{{cite journal | vauthors = Westfall PJ, Pitera DJ, Lenihan JR, Eng D, Woolard FX, Regentin R, Horning T, Tsuruta H, Melis DJ, Owens A, Fickes S, Diola D, Benjamin KR, Keasling JD, Leavell MD, McPhee DJ, Renninger NS, Newman JD, Paddon CJ | title = Production of amorphadiene in yeast, and its conversion to dihydroartemisinic acid, precursor to the antimalarial agent artemisinin | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 3 | pages = E111–8 | date = January 2012 | pmid = 22247290 | pmc = 3271868 | doi = 10.1073/pnas.1110740109 | bibcode = 2012PNAS..109E.111W }}</ref><br />
<br />
The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature.<br />
<br />
Top7蛋白是最早被设计出的蛋白质之一,其折叠方式在自然界中从未出现过。<br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be [[Transformation (genetics)|transformed]] with new DNA. Several ways allow constructing synthetic DNA components and even entire [[Artificial gene synthesis|synthetic genomes]], but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or [[phenotype]]s while growing and thriving.<ref>{{cite news|url=https://www.independent.co.uk/news/science/eureka-scientists-unveil-giant-leap-towards-synthetic-life-9219644.html|title=Eureka! Scientists unveil giant leap towards synthetic life|last=Connor|first=Steve|date=28 March 2014|work=The Independent|access-date=2015-08-06}}</ref> Cell transformation is used to create [[Synthetic biological circuit|biological circuits]], which can be manipulated to yield desired outputs.<ref name=":0" /><ref name=":1" /><br />
<br />
Natural proteins can be engineered; for example, by directed evolution, novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with similar properties as hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but insensitive to the native ligand, acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100 fold specificity for production of longer chain alcohols from sugar.<br />
<br />
天然蛋白质可以被改造,例如,通过定向进化,可以产生与现有蛋白质功能相当或更优的新蛋白质结构。一个研究小组构建了一种螺旋束,它能够以与血红蛋白相似的性质结合氧,但不结合一氧化碳。一个类似的蛋白质结构被生成以支持多种氧化还原酶活性,而另一个研究小组生成了一个在结构和序列上全新的 ATP 酶。另一个研究小组产生了一类 G 蛋白偶联受体,这类受体可以被惰性小分子N-氧化氯氮平激活,但对天然配体乙酰胆碱不敏感; 这些受体被称为 DREADDs。新的功能或蛋白质特异性也可以利用计算方法进行设计。一项研究使用了两种不同的计算方法——用生物信息学和分子模拟方法挖掘序列数据库,用计算酶设计方法重新编写酶的特异性。这两种方法设计出的酶对由糖生产长链醇都具有大于100倍的特异性。<br />
<br />
<br />
<br />
By integrating synthetic biology with [[materials science]], it would be possible to use cells as microscopic molecular foundries to produce materials with properties whose properties were genetically encoded. Re-engineering has produced Curli fibers, the [[amyloid]] component of extracellular material of [[biofilms]], as a platform for programmable [[nanomaterial]]. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.<ref>{{cite journal|vauthors=Nguyen PQ, Botyanszki Z, Tay PK, Joshi NS|date=September 2014|title=Programmable biofilm-based materials from engineered curli nanofibres|journal=Nature Communications|volume=5|pages=4945|bibcode=2014NatCo...5.4945N|doi=10.1038/ncomms5945|pmid=25229329|doi-access=free}}</ref><br />
<br />
Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl tyrosine; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA-Aminoacyl tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required.<br />
<br />
另一个常见的研究是对20种天然氨基酸的扩展。除了终止密码子, 61个密码子已被破译出,但所有生物体中一般只有20个氨基酸。某些密码子被设计为编码可替代的氨基酸,包括: 非标准氨基酸,如 o- 甲基酪氨酸; 或外源氨基酸,如4- 氟苯丙氨酸。通常情况下,这些项目利用从其他生物体获取的重新编码的无意义抑制 tRNA-氨酰基 tRNA 合成酶对,虽然在大多数情况下这需要大量的工程。<br />
<br />
<br />
<br />
=== Designed proteins 设计蛋白质 ===<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid. For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of Chorismate mutase still had catalytic activity when only 9 amino acids were used.<br />
<br />
其他研究人员通过减少常规的20种氨基酸来研究蛋白质的结构和功能。有限的蛋白质序列库是通过生成蛋白质制成的,其中一组氨基酸可以被一个单一的氨基酸所取代。例如,一个蛋白质中的几个非极性氨基酸都可以被一个非极性氨基酸所取代。一个研究项目证明了,当只使用9种氨基酸时,一种改造过的分支酸变位酶仍然具有催化活性。<br />
<br />
<br />
<br />
[[File:Top7.png|thumb|The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature<ref name="kuhlman03">{{cite journal | vauthors = Kuhlman B, Dantas G, Ireton GC, Varani G, Stoddard BL, Baker D | title = Design of a novel globular protein fold with atomic-level accuracy | journal = Science | volume = 302 | issue = 5649 | pages = 1364–8 | date = November 2003 | pmid = 14631033 | doi = 10.1126/science.1089427 | bibcode = 2003Sci...302.1364K | s2cid = 1939390 | url = https://semanticscholar.org/paper/3188f905b60172dcad17a9b8c23567400c2bb65f }}</ref> ]]<br />
<br />
Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective. The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentive chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<br />
<br />
研究人员和公司运用合成生物学来合成具有高活性、最佳产量和有效性的工业酶。这些合成酶旨在改善产品,如洗涤剂和无乳糖乳制品,以及使他们更具成本效益。合成生物学对代谢工程学的改进是生物技术用于工业发现药物和发酵性化学品的一个典例。合成生物学可以研究生化生产中的模块化途径系统,并提高代谢生产的产量。人工酶活性及其对代谢反应速率和产量的后续影响可能开发出“改善细胞特性有效的新策略...... 用于重要的工业生化产品”。<br />
<br />
<br />
<br />
Natural proteins can be engineered; for example, by [[directed evolution]], novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a [[helix bundle]] that was capable of binding [[oxygen]] with similar properties as [[hemoglobin]], yet did not bind [[carbon monoxide]].<ref>{{cite journal | vauthors = Koder RL, Anderson JL, Solomon LA, Reddy KS, Moser CC, Dutton PL | title = Design and engineering of an O(2) transport protein | journal = Nature | volume = 458 | issue = 7236 | pages = 305–9 | date = March 2009 | pmid = 19295603 | pmc = 3539743 | doi = 10.1038/nature07841 | bibcode = 2009Natur.458..305K }}</ref> A similar protein structure was generated to support a variety of [[oxidoreductase]] activities <ref>{{cite journal | vauthors = Farid TA, Kodali G, Solomon LA, Lichtenstein BR, Sheehan MM, Fry BA, Bialas C, Ennist NM, Siedlecki JA, Zhao Z, Stetz MA, Valentine KG, Anderson JL, Wand AJ, Discher BM, Moser CC, Dutton PL | title = Elementary tetrahelical protein design for diverse oxidoreductase functions | journal = Nature Chemical Biology | volume = 9 | issue = 12 | pages = 826–833 | date = December 2013 | pmid = 24121554 | pmc = 4034760 | doi = 10.1038/nchembio.1362 }}</ref> while another formed a structurally and sequentially novel [[ATPase]].<ref name="WangHecht2020">{{cite journal|last1=Wang|first1=MS|last2=Hecht|first2=MH|title=A Completely De Novo ATPase from Combinatorial Protein Design|journal=Journal of the American Chemical Society|year=2020|volume=142|issue=36|pages=15230–15234|issn=0002-7863|doi=10.1021/jacs.0c02954|pmid=32833456}}</ref> Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule [[clozapine N-oxide]] but insensitive to the native [[ligand]], [[acetylcholine]]; these receptors are known as [[Receptor activated solely by a synthetic ligand|DREADDs]].<ref>{{cite journal | vauthors = Armbruster BN, Li X, Pausch 
MH, Herlitze S, Roth BL | title = Evolving the lock to fit the key to create a family of G protein-coupled receptors potently activated by an inert ligand | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 104 | issue = 12 | pages = 5163–8 | date = March 2007 | pmid = 17360345 | pmc = 1829280 | doi = 10.1073/pnas.0700293104 | bibcode = 2007PNAS..104.5163A }}</ref> Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100 fold specificity for production of longer chain alcohols from sugar.<ref>{{cite journal | vauthors = Mak WS, Tran S, Marcheschi R, Bertolani S, Thompson J, Baker D, Liao JC, Siegel JB | title = Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway | journal = Nature Communications | volume = 6 | pages = 10005 | date = November 2015 | pmid = 26598135 | pmc = 4673503 | doi = 10.1038/ncomms10005 | bibcode = 2015NatCo...610005M }}</ref><br />
<br />
<br />
<br />
Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, Ribosome Binding Site Calculator, Cello, and Non-Repetitive Parts Calculator enable the design of new genetic systems.<br />
<br />
科学家可以将数字信息编码到一条合成 DNA 链上。2012年,乔治·M·丘奇用 DNA 将他的一本关于合成生物学的书编码。这5.3 Mb 的数据量比之前存储在合成 DNA 中的最大信息量大了1000多倍。一个类似的项目将威廉·莎士比亚的十四行诗全部编码在 DNA 中。更广泛地说,NUPACK、ViennaRNA、Ribosome Binding Site Calculator、Cello 和 Non-Repetitive Parts Calculator 等算法使新遗传系统的设计成为可能。<br />
<br />
Another common investigation is [[Expanded genetic code|expansion]] of the natural set of 20 [[amino acid]]s. Excluding [[stop codon]]s, 61 [[codons]] have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl [[tyrosine]]; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded [[nonsense suppressor]] [[Transfer RNA|tRNA]]-[[Aminoacyl tRNA synthetase]] pairs from other organisms, though in most cases substantial engineering is required.<ref>{{cite journal | vauthors = Wang Q, Parrish AR, Wang L | title = Expanding the genetic code for biological studies | journal = Chemistry & Biology | volume = 16 | issue = 3 | pages = 323–36 | date = March 2009 | pmid = 19318213 | pmc = 2696486 | doi = 10.1016/j.chembiol.2009.03.001 }}</ref><br />
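The recoding of a codon described above can be illustrated with a toy translation table in which the amber stop codon UAG is reassigned to a nonstandard amino acid; the single-letter label `X` and the short codon table below are purely illustrative:

```python
# Toy illustration of genetic-code expansion: the amber stop codon UAG
# is reassigned to a nonstandard amino acid (labelled 'X' here).
# Only a handful of codons are included; a real table has all 64.

STANDARD = {"AUG": "M", "UUU": "F", "UAC": "Y", "UAG": "*", "UAA": "*"}
EXPANDED = dict(STANDARD, UAG="X")  # suppressor tRNA reads UAG as 'X'

def translate(mrna, table):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = table[mrna[i:i + 3]]
        if aa == "*":          # stop codon terminates translation
            break
        peptide.append(aa)
    return "".join(peptide)

mrna = "AUGUUUUAGUAC"
print(translate(mrna, STANDARD))  # 'MF'   (UAG read as stop)
print(translate(mrna, EXPANDED))  # 'MFXY' (UAG read through as 'X')
```

This is exactly the behaviour of a nonsense-suppressor tRNA pair: the same transcript yields a truncated peptide in the standard code and a full-length peptide carrying the new residue in the expanded code.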
<br />
<br />
<br />
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate mRNA or proteins able to use the artificial nucleotides.<br />
<br />
无论是在体外还是体内,在核酸和蛋白质中掺入非天然核苷酸和氨基酸的技术已经被开发出来。例如,2014年5月,研究人员宣布他们已经成功地将两种新的人工核苷酸引入细菌 DNA。通过在培养基中加入单个的人工核苷酸,他们能够交换细菌24次; 细菌没有产生能够利用人工核苷酸的 mRNA 或蛋白质。<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid.<ref>{{cite journal|author=Davidson, AR|author2=Lumb, KJ|author3=Sauer, RT|date=1995|title=Cooperatively folded proteins in random sequence libraries|journal=Nature Structural Biology|volume=2|issue=10|pages=856–864|doi=10.1038/nsb1095-856|pmid=7552709|s2cid=31781262}}</ref> For instance, several [[Chemical polarity|non-polar]] amino acids within a protein can all be replaced with a single non-polar amino acid.<ref>{{cite journal|vauthors=Kamtekar S, Schiffer JM, Xiong H, Babik JM, Hecht MH|date=December 1993|title=Protein design by binary patterning of polar and nonpolar amino acids|journal=Science|volume=262|issue=5140|pages=1680–5|bibcode=1993Sci...262.1680K|doi=10.1126/science.8259512|pmid=8259512}}</ref> One project demonstrated that an engineered version of [[Chorismate mutase]] still had catalytic activity when only 9 amino acids were used.<ref>{{cite journal|vauthors=Walter KU, Vamvaca K, Hilvert D|date=November 2005|title=An active enzyme constructed from a 9-amino acid alphabet|journal=The Journal of Biological Chemistry|volume=280|issue=45|pages=37742–6|doi=10.1074/jbc.M507210200|pmid=16144843|doi-access=free}}</ref><br />
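Building a reduced-alphabet sequence of the kind described amounts to collapsing groups of amino acids onto single representatives. The grouping below is a hypothetical 4-letter illustration, not the 9-letter set actually used for chorismate mutase:

```python
# Sketch of reducing a protein sequence to a smaller amino acid alphabet
# by replacing each group of residues with a single representative.
# The 4-group mapping here is a hypothetical illustration.

REDUCTION = {
    # non-polar residues -> L
    "A": "L", "V": "L", "L": "L", "I": "L", "M": "L", "F": "L",
    # polar uncharged residues -> S
    "S": "S", "T": "S", "N": "S", "Q": "S", "C": "S",
    "G": "S", "P": "S", "Y": "S", "W": "S",
    # acidic residues -> E
    "D": "E", "E": "E",
    # basic residues -> K
    "K": "K", "R": "K", "H": "K",
}

def reduce_sequence(seq):
    """Map a 20-letter protein sequence onto the reduced alphabet."""
    return "".join(REDUCTION[aa] for aa in seq)

reduced = reduce_sequence("MVLSEGEWQLVLHVWAK")  # myoglobin-like fragment
print(reduced)            # collapsed version of the sequence
print(len(set(reduced)))  # 4 -> only four residue types remain
```

Screening libraries of such collapsed sequences is how the cited studies probe how few residue types suffice for folding and catalysis.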
<br />
<br />
<br />
Researchers and companies practice synthetic biology to synthesize [[industrial enzymes]] with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective.<ref>{{cite web|url=https://www.thermofisher.com/us/en/home/life-science/synthetic-biology/synthetic-biology-applications.html|title=Synthetic Biology Applications|website=www.thermofisher.com|access-date=2015-11-12}}</ref> The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentive chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<ref>{{cite journal | vauthors = Liu Y, Shin HD, Li J, Liu L | title = Toward metabolic engineering in the context of system biology and synthetic biology: advances and prospects | journal = Applied Microbiology and Biotechnology | volume = 99 | issue = 3 | pages = 1109–18 | date = February 2015 | pmid = 25547833 | doi = 10.1007/s00253-014-6298-y | s2cid = 954858 }}</ref><br />
<br />
Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<br />
<br />
合成生物学引起了美国国家航空航天局的兴趣,因为它可以促使从地球上发射的受限化合物组合为宇航员生产资源。特别是在火星上,合成生物学可以产生基于当地资源的生产过程,使其成为开发对地球依赖性较低的载人前哨站的有力工具。<br />
<br />
<br />
<br />
=== Designed nucleic acid systems 设计核酸系统 ===<br />
<br />
Scientists can encode digital information onto a single strand of [[synthetic DNA]]. In 2012, [[George M. Church]] encoded one of his books about synthetic biology in DNA. The 5.3 [[Megabit|Mb]] of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA.<ref>{{cite journal | vauthors = Church GM, Gao Y, Kosuri S | title = Next-generation digital information storage in DNA | journal = Science | volume = 337 | issue = 6102 | pages = 1628 | date = September 2012 | pmid = 22903519 | doi = 10.1126/science.1226355 | bibcode = 2012Sci...337.1628C | s2cid = 934617 | url = https://semanticscholar.org/paper/0856a685e85bcd27c11cd5f385be818deceb27bd }}</ref> A similar project encoded the complete [[sonnet]]s of [[William Shakespeare]] in DNA.<ref>{{cite web|url=http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|title=Huge amounts of data can be stored in DNA|date=23 January 2013|publisher=Sky News|access-date=24 January 2013|archive-url=https://web.archive.org/web/20160531044937/http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|archive-date=2016-05-31 }}</ref> More generally, algorithms such as NUPACK,<ref>{{Cite journal|last1=Zadeh|first1=Joseph N.|last2=Steenberg|first2=Conrad D.|last3=Bois|first3=Justin S.|last4=Wolfe|first4=Brian R.|last5=Pierce|first5=Marshall B.|last6=Khan|first6=Asif R.|last7=Dirks|first7=Robert M.|last8=Pierce|first8=Niles A.|date=2011-01-15|title=NUPACK: Analysis and design of nucleic acid systems|journal=Journal of Computational Chemistry|language=en|volume=32|issue=1|pages=170–173|doi=10.1002/jcc.21596|pmid=20645303}}</ref> ViennaRNA,<ref>{{Cite journal|last1=Lorenz|first1=Ronny|last2=Bernhart|first2=Stephan H.|last3=Höner zu Siederdissen|first3=Christian|last4=Tafer|first4=Hakim|last5=Flamm|first5=Christoph|last6=Stadler|first6=Peter F.|last7=Hofacker|first7=Ivo L.|date=2011-11-24|title=ViennaRNA Package 2.0|journal=Algorithms for 
Molecular Biology|language=en|volume=6|issue=1|pages=26|doi=10.1186/1748-7188-6-26|issn=1748-7188|pmc=3319429|pmid=22115189}}</ref> Ribosome Binding Site Calculator,<ref>{{Cite journal|last1=Salis|first1=Howard M.|last2=Mirsky|first2=Ethan A.|last3=Voigt|first3=Christopher A.|date=October 2009|title=Automated design of synthetic ribosome binding sites to control protein expression|journal=Nature Biotechnology|language=en|volume=27|issue=10|pages=946–950|doi=10.1038/nbt.1568|pmid=19801975|issn=1546-1696|pmc=2782888}}</ref> Cello,<ref>{{Cite journal|last1=Nielsen|first1=A. A. K.|last2=Der|first2=B. S.|last3=Shin|first3=J.|last4=Vaidyanathan|first4=P.|last5=Paralanov|first5=V.|last6=Strychalski|first6=E. A.|last7=Ross|first7=D.|last8=Densmore|first8=D.|last9=Voigt|first9=C. A.|date=2016-04-01|title=Genetic circuit design automation|journal=Science|language=en|volume=352|issue=6281|pages=aac7341|doi=10.1126/science.aac7341|pmid=27034378|issn=0036-8075|doi-access=free}}</ref> and Non-Repetitive Parts Calculator<ref>{{Cite journal|last1=Hossain|first1=Ayaan|last2=Lopez|first2=Eriberto|last3=Halper|first3=Sean M.|last4=Cetnar|first4=Daniel P.|last5=Reis|first5=Alexander C.|last6=Strickland|first6=Devin|last7=Klavins|first7=Eric|last8=Salis|first8=Howard M.|date=2020-07-13|title=Automated design of thousands of nonrepetitive parts for engineering stable genetic systems|url=https://www.nature.com/articles/s41587-020-0584-2|journal=Nature Biotechnology|language=en|pages=1–10|doi=10.1038/s41587-020-0584-2|pmid=32661437|s2cid=220506228|issn=1546-1696}}</ref> enable the design of new genetic systems.<br />
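The underlying idea of DNA data storage is simple: two bits map to one nucleotide. The sketch below uses one of many possible mappings; real schemes such as Church's add redundancy for error correction and avoid long homopolymer runs, neither of which is modeled here:

```python
# Minimal sketch of DNA digital data storage: 2 bits per nucleotide.
# The mapping is purely illustrative; real encodings add error
# correction and avoid repeated-base runs that are hard to synthesize.

BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS = {v: k for k, v in BASE.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes into a nucleotide string (4 nt per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Recover the original bytes from the nucleotide string."""
    bits = "".join(BITS[b] for b in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

dna = encode(b"GATTACA")
print(dna)          # 28-nucleotide strand (7 bytes x 4 nt)
print(decode(dna))  # b'GATTACA'
```

At this density of 2 bits per base, Church's 5.3 Mb book corresponds to a few million synthesized nucleotides, which is why oligo synthesis cost dominates the practicality of the approach.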
<br />
<br />
<br />
Gene functions in the minimal genome of the synthetic organism, ''Syn 3''.<br />
<br />
合成生物最小基因组中的基因功能,''Syn 3''。<br />
<br />
Many technologies have been developed for incorporating [[Nucleic acid analogue|unnatural nucleotides]] and amino acids into nucleic acids and proteins, both ''in vitro'' and ''in vivo''. For example, in May 2014, researchers announced that they had successfully introduced two new artificial [[nucleotides]] into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate [[Messenger RNA|mRNA]] or proteins able to use the artificial nucleotides.<ref name="NYT-20140507">{{cite news|url=https://www.nytimes.com/2014/05/08/business/researchers-report-breakthrough-in-creating-artificial-genetic-code.html|title=Researchers Report Breakthrough in Creating Artificial Genetic Code|last=Pollack|first=Andrew|date=May 7, 2014|work=[[New York Times]]|access-date=May 7, 2014}}</ref><ref name="NATURE-20140507">{{cite journal|last=Callaway|first=Ewen|date=May 7, 2014|title=First life with 'alien' DNA|url=http://www.nature.com/news/first-life-with-alien-dna-1.15179|journal=[[Nature (journal)|Nature]]|doi=10.1038/nature.2014.15179|s2cid=86967999|access-date=May 7, 2014}}</ref><ref name="NATJ-20140507">{{cite journal|vauthors=Malyshev DA, Dhami K, Lavergne T, Chen T, Dai N, Foster JM, Corrêa IR, Romesberg FE|date=May 2014|title=A semi-synthetic organism with an expanded genetic alphabet|journal=Nature|volume=509|issue=7500|pages=385–8|bibcode=2014Natur.509..385M|doi=10.1038/nature13314|pmc=4058825|pmid=24805238}}</ref><br />
<br />
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to either probe the origins of life, study some of the properties of life, or more ambitiously to recreate life from non-living (abiotic) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools. Nobody has yet been able to create a fully synthetic living cell, although a chemically synthesized bacterial genome has been introduced into genomically emptied host cells, which were then able to grow and replicate. Mycoplasma laboratorium is the only living organism with a completely engineered genome.<br />
<br />
合成生物学的一个重要课题是合成生命,它涉及到在体外由生物分子和/或其化学类似物创造的假想生物体。合成生命实验或者试图探索生命的起源,研究生命的某些特性,或者更雄心勃勃地从非生命(非生物)组成部分中重新创造生命。合成生命生物学试图创造能够执行重要功能的生命有机体,从制造药品到净化被污染的土地和水。在医学上,它提供了使用设计生物学部件作为新类型治疗和诊断工具的起点的前景。目前还没有人能够制造出完全合成的活细胞,不过化学合成的细菌基因组已被移植到清空了基因组的宿主细胞中,这些细胞能够生长和复制。实验室支原体(Mycoplasma laboratorium)是唯一一个拥有完全工程化基因组的生物体。<br />
<br />
<br />
<br />
=== Space exploration 太空探索 ===<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome with an expanded genetic code. The nucleosides added are d5SICS and dNaM. In 2017, the international Build-a-Cell large-scale research collaboration for the construction of a synthetic living cell was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. <br />
The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.<br />
<br />
2014年,第一个具有人工扩展 DNA 编码的活有机体问世; 研究小组使用大肠杆菌提取了它的基因组,并用扩展基因编码的染色体替换了它。添加的核苷是 d5SICS 和 dNaM。随后,一些国家成立了本国的合成细胞组织,包括 FabriCell、MaxSynBio 和 BaSyC。欧洲合成细胞的研究力量于2019年统一为 SynCellEU 倡议。<br />
<br />
Synthetic biology raised [[NASA|NASA's]] interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.<ref name="Verseux, C. 2015 73–100">{{Cite book|author=Verseux, C.|author2=Paulino-Lima, I.|author3=Baque, M.|author4=Billi, D.|author5=Rothschild, L.|date=2016|title=Synthetic Biology for Space Exploration: Promises and Societal Implications|journal=Ambivalences of Creating Life. Societal and Philosophical Dimensions of Synthetic Biology, Publisher: Springer-Verlag|volume=45|pages=73–100|doi=10.1007/978-3-319-21088-9_4|series=Ethics of Science and Technology Assessment|isbn=978-3-319-21087-2}}</ref><ref>{{cite journal|last1=Menezes|first1=A|last2=Cumbers|first2=J|last3=Hogan|first3=J|last4=Arkin|first4=A|date=2014|title=Towards synthetic biological approaches to resource utilization on space missions|journal=Journal of the Royal Society, Interface|volume=12|issue=102|pages=20140715|doi=10.1098/rsif.2014.0715|pmid=25376875|pmc=4277073}}</ref><ref>{{cite journal | vauthors = Montague M, McArthur GH, Cockell CS, Held J, Marshall W, Sherman LA, Wang N, Nicholson WL, Tarjan DR, Cumbers J | title = The role of synthetic biology for in situ resource utilization (ISRU) | journal = Astrobiology | volume = 12 | issue = 12 | pages = 1135–42 | date = December 2012 | pmid = 23140229 | doi = 10.1089/ast.2012.0829 | bibcode = 2012AsBio..12.1135M }}</ref> On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<ref name="Verseux, C. 
2015 73–100" /> Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.<ref>{{Cite web|title=NASA - Designer Plants on Mars|url=https://www.nasa.gov/centers/goddard/news/topstory/2005/mars_plants.html|last=GSFC|first=Bill Steigerwald |website=www.nasa.gov|language=en|access-date=2020-05-29}}</ref><br />
<br />
<br />
<br />
=== Synthetic life 合成生命 ===<br />
<br />
{{Further|Artificially Expanded Genetic Information System|Hypothetical types of biochemistry}}<br />
<br />
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. The other way is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into bacteria. The bacteria then only release target therapeutic molecules to the tumor through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
<br />
长期以来,细菌一直被用于癌症治疗。双歧杆菌和梭状芽胞杆菌选择性地定殖于肿瘤并减小肿瘤体积。最近,合成生物学家对细菌进行了重新编码,使其能够感知特定的癌症状态并对其做出反应。大多数情况下,细菌被用来直接向肿瘤输送治疗分子,以最小化脱靶效应。为了靶向肿瘤细胞,细菌表面表达出了可以特异性识别肿瘤的肽。所使用的多肽包括一种特异性靶向人表皮生长因子受体2的亲和体(affibody)分子和一种合成黏附素。另一种方法是通过在细菌中构建一个"与"逻辑门,让细菌感知肿瘤的微环境,例如缺氧。然后,细菌只通过溶菌或细菌分泌系统向肿瘤释放靶向治疗分子。溶菌具有刺激免疫系统和控制生长的优点。这个过程中可以使用多种类型的分泌系统和其他策略。该系统可由外部信号诱导。这些诱导因子包括化学物质、电磁波或光波。<br />
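The AND-gate behaviour described above (release a therapeutic only when all tumor-specific signals are present) can be sketched as threshold logic; the signal names and threshold values below are hypothetical:

```python
# Sketch of an AND logic gate in a tumor-sensing bacterium: the
# therapeutic output is produced only when BOTH inputs cross their
# activation thresholds. Signal names and thresholds are hypothetical.

def and_gate(hypoxia, tumor_marker, thresh_hypoxia=0.5, thresh_marker=0.5):
    """Return True when both environmental signals exceed threshold."""
    return hypoxia > thresh_hypoxia and tumor_marker > thresh_marker

# Only the tumor microenvironment (both signals high) triggers release;
# either signal alone keeps the circuit silent, limiting off-target effects.
print(and_gate(0.9, 0.8))  # True  -> release therapeutic
print(and_gate(0.9, 0.1))  # False -> stay silent
print(and_gate(0.2, 0.8))  # False -> stay silent
```

The point of the AND topology is exactly this truth table: requiring two independent tumor signatures sharply reduces activation in healthy tissue compared to a single-input sensor.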
<br />
[[File:Syn3 genome.svg|thumb|upright=1.25|[[Gene]] functions in the minimal [[genome]] of the synthetic organism, ''[[Syn 3]]''.<ref name="Hutchison">{{cite journal | vauthors = Hutchison CA, Chuang RY, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, Gill J, Kannan K, Karas BJ, Ma L, Pelletier JF, Qi ZQ, Richter RA, Strychalski EA, Sun L, Suzuki Y, Tsvetanova B, Wise KS, Smith HO, Glass JI, Merryman C, Gibson DG, Venter JC | title = Design and synthesis of a minimal bacterial genome | journal = Science | volume = 351 | issue = 6280 | pages = aad6253 | date = March 2016 | pmid = 27013737 | doi = 10.1126/science.aad6253 | bibcode = 2016Sci...351.....H | doi-access = free }}</ref>]]<br />
<br />
One important topic in synthetic biology is ''synthetic life'', that is concerned with hypothetical organisms created ''[[in vitro]]'' from [[biomolecule]]s and/or [[hypothetical types of biochemistry|chemical analogues thereof]]. Synthetic life experiments attempt to either probe the [[origins of life]], study some of the properties of life, or more ambitiously to recreate life from non-living ([[abiotic components|abiotic]]) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water.<ref name="enzymes2014">{{cite news |last=Connor |first=Steve |url=https://www.independent.co.uk/news/science/major-synthetic-life-breakthrough-as-scientists-make-the-first-artificial-enzymes-9896333.html |title=Major synthetic life breakthrough as scientists make the first artificial enzymes |work=The Independent |location=London |date=1 December 2014 |access-date=2015-08-06 }}</ref> In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.<ref name="enzymes2014" /><br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and is unique in cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
在这些治疗方法中应用了多种菌种和菌株。最常用的细菌是鼠伤寒沙门氏菌、大肠杆菌、双歧杆菌、链球菌、乳酸杆菌、李斯特菌和枯草芽孢杆菌。这些菌种各有特性,在组织定殖、与免疫系统的相互作用以及应用的便利性方面,它们在癌症治疗中各有独到之处。<br />
<br />
<br />
<br />
A living "artificial cell" has been defined as a completely synthetic cell that can capture [[energy]], maintain [[electrochemical gradient|ion gradients]], contain [[macromolecules]] as well as store information and have the ability to [[mutate]].<ref name="Deamer">{{cite journal | vauthors = Deamer D | title = A giant step towards artificial life? | journal = Trends in Biotechnology | volume = 23 | issue = 7 | pages = 336–8 | date = July 2005 | pmid = 15935500 | doi = 10.1016/j.tibtech.2005.05.008 }}</ref> Nobody has been able to create such a cell.<ref name='Deamer'/><br />
<br />
<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells.<br />
<br />
免疫系统在癌症中起着重要作用,可以被用来攻击癌细胞。基于细胞的疗法侧重于免疫疗法,主要通过改造 T 细胞来实现。<br />
<br />
A completely synthetic bacterial chromosome was produced in 2010 by [[Craig Venter]], and his team introduced it to genomically emptied bacterial host cells.<ref name="gibson52">{{cite journal | vauthors = Gibson DG, Glass JI, Lartigue C, Noskov VN, Chuang RY, Algire MA, Benders GA, Montague MG, Ma L, Moodie MM, Merryman C, Vashee S, Krishnakumar R, Assad-Garcia N, Andrews-Pfannkoch C, Denisova EA, Young L, Qi ZQ, Segall-Shapiro TH, Calvey CH, Parmar PP, Hutchison CA, Smith HO, Venter JC | title = Creation of a bacterial cell controlled by a chemically synthesized genome | journal = Science | volume = 329 | issue = 5987 | pages = 52–6 | date = July 2010 | pmid = 20488990 | doi = 10.1126/science.1190719 | bibcode = 2010Sci...329...52G | doi-access = free }}</ref> The host cells were able to grow and replicate.<ref>{{cite web| url=https://www.npr.org/templates/transcript/transcript.php?storyId=127010591| title=Scientists Reach Milestone On Way To Artificial Life| access-date=2010-06-09|date=2010-05-20}}</ref><ref>{{cite web|last1=Venter|first1=JC|title=From Designing Life to Prolonging Healthy Life|url=https://www.youtube.com/watch?v=Gwu_djYMm3w&t=30s|website=YouTube|publisher=University of California Television (UCTV)|access-date=1 February 2017}}</ref> The [[Mycoplasma laboratorium]] is the only living organism with a completely engineered genome.<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second-generation CAR-based therapy was approved by the FDA.<br />
<br />
T 细胞受体经过改造和“训练”,用以检测癌症表位。嵌合抗原受体(CAR)由一段抗体片段与细胞内 T 细胞信号域融合而成,这些信号域可以激活细胞并触发其增殖。美国食品药品监督管理局(FDA)已批准一种第二代基于 CAR 的疗法。<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used ''E. coli'' that had its genome extracted and replaced with a chromosome with an expanded genetic code. The [[nucleoside]]s added are [[d5SICS]] and [[dNaM]].<ref name="NATJ-20140507"/><br />
<br />
<br />
<br />
Gene switches were designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects. Mechanisms can more finely control the system and stop and reactivate it. Since the number of T-cells is important for therapy persistence and severity, growth of T-cells is also controlled to tune the effectiveness and safety of therapeutics.<br />
<br />
基因开关被设计出来以提高治疗的安全性。如果病人出现严重的副作用,杀伤开关可以终止治疗。还有一些机制能够更精细地控制系统,使其停止并重新激活。由于 T 细胞的数量对治疗的持续性和强度非常重要,T 细胞的生长也受到控制,以调节治疗的有效性和安全性。<br />
<br />
In May 2019, researchers, in a milestone effort, reported the creation of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515"/><ref name="NAT-20190515"/><br />
<br />
<br />
<br />
Although several mechanisms can improve safety and control, limitations include the difficulty of introducing large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.<br />
<br />
虽然有几种机制可以提高安全性和可控性,但它们也都存在局限性,包括难以将大型 DNA 电路导入细胞,以及将外来成分(特别是蛋白质)引入细胞所带来的风险。<br />
<br />
In 2017, the international [[Build-a-Cell]] large-scale research collaboration for the construction of a synthetic living cell was started,<ref>{{cite web|url=http://buildacell.io/|title=Build-a-Cell|accessdate=4 Dec 2019}}</ref> followed by national synthetic cell organizations in several countries, including FabriCell,<ref>{{cite web|url=http://fabricell.org/|title=FabriCell|accessdate=8 Dec 2019}}</ref> MaxSynBio<ref>{{cite web|url=https://www.maxsynbio.mpg.de/home/|title=MaxSynBio - Max Planck Research Network in Synthetic Biology|accessdate=8 Dec 2019}}</ref> and BaSyC.<ref>{{cite web|url=http://www.basyc.nl/|title=BaSyC|accessdate=8 Dec 2019}}</ref> The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.<ref>{{cite web|url=http://www.syntheticcell.eu/|title=SynCell EU|accessdate=8 Dec 2019}}</ref><br />
<br />
<br />
<br />
=== Drug delivery platforms 药物输送平台 ===<br />
<br />
==== Engineered bacteria-based platform 基于细菌设计的平台 ====<br />
<br />
Bacteria have long been used in cancer treatment. ''[[Bifidobacterium]]'' and ''[[Clostridium]]'' selectively colonize tumors and reduce their size.<ref name="Zu_2014">{{cite journal|vauthors=Zu C, Wang J|date=August 2014|title=Tumor-colonizing bacteria: a potential tumor targeting therapy|url=|journal=Critical Reviews in Microbiology|volume=40|issue=3|pages=225–35|doi=10.3109/1040841X.2013.776511|pmid=23964706|s2cid=26498221}}</ref> Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, [[peptide]]s that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an [[affibody molecule]] that specifically targets human [[Epidermal growth factor receptor|epidermal growth factor receptor 2]]<ref name="Gujrati_2014">{{cite journal|vauthors=Gujrati V, Kim S, Kim SH, Min JJ, Choy HE, Kim SC, Jon S|date=February 2014|title=Bioengineered bacterial outer membrane vesicles as cell-specific drug-delivery vehicles for cancer therapy|url=|journal=ACS Nano|volume=8|issue=2|pages=1525–37|doi=10.1021/nn405724x|pmid=24410085}}</ref> and a synthetic [[Adhesin molecule (immunoglobulin -like)|adhesin]].<ref name="Piñero-Lambea_2015">{{cite journal|vauthors=Piñero-Lambea C, Bodelón G, Fernández-Periáñez R, Cuesta AM, Álvarez-Vallina L, Fernández LÁ|date=April 2015|title=Programming controlled adhesion of E. coli to target surfaces, cells, and tumors with synthetic adhesins|journal=ACS Synthetic Biology|volume=4|issue=4|pages=463–73|doi=10.1021/sb500252a|pmc=4410913|pmid=25045780}}</ref> The other way is to allow bacteria to sense the [[tumor microenvironment]], for example hypoxia, by building an AND logic gate into bacteria.<ref>{{cite journal | last1 = Deyneko | first1 = I.V. | last2 = Kasnitz | first2 = N. | last3 = Leschner | first3 = S. 
| last4 = Weiss | first4 = S. | year = 2016| title = Composing a tumor specific bacterial promoter | url = | journal = PLOS ONE | volume = 11| issue = 5| page = e0155338| doi = 10.1371/journal.pone.0155338 | pmid = 27171245 | pmc = 4865170 }}</ref> The bacteria then only release target therapeutic molecules to the tumor through either [[lysis]]<ref>{{cite journal | last1 = Rice | first1 = KC | last2 = Bayles | first2 = KW | year = 2008 | title = Molecular control of bacterial death and lysis | journal = Microbiol Mol Biol Rev | volume = 72 | issue = 1| pages = 85–109 | doi = 10.1128/mmbr.00030-07 | pmid = 18322035 | pmc = 2268280 }}</ref> or the [[bacterial secretion system]].<ref>{{cite journal | last1 = Ganai | first1 = S. | last2 = Arenas | first2 = R. B. | last3 = Forbes | first3 = N. S. | year = 2009 | title = Tumour-targeted delivery of TRAIL using Salmonella typhimurium enhances breast cancer survival in mice | url = | journal = Br. J. Cancer | volume = 101 | issue = 10| pages = 1683–1691 | doi = 10.1038/sj.bjc.6605403 | pmid = 19861961 | pmc = 2778534 }}</ref> Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
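The AND-gate sensing described above can be sketched as a toy model. This is an illustrative sketch only: the signal names and thresholds (`oxygen_level`, `cell_density` and their cutoffs) are assumptions for demonstration, not parameters of any cited circuit.

```python
# Toy model of an AND-gate biosensor: the engineered bacterium releases its
# therapeutic payload only when BOTH tumor cues are present, e.g. hypoxia
# AND high local density. Thresholds are illustrative assumptions.

def and_gate_release(oxygen_level: float, cell_density: float,
                     hypoxia_threshold: float = 0.2,
                     density_threshold: float = 0.8) -> bool:
    """Return True if the payload should be released (both cues present)."""
    hypoxic = oxygen_level < hypoxia_threshold   # tumor cores are oxygen-poor
    crowded = cell_density > density_threshold   # quorum-like density cue
    return hypoxic and crowded                   # AND logic: both required

# Healthy tissue: well oxygenated and sparse -> no release
assert not and_gate_release(oxygen_level=0.9, cell_density=0.1)
# Tumor microenvironment: hypoxic and dense -> release
assert and_gate_release(oxygen_level=0.1, cell_density=0.95)
```

The AND gate is what limits off-target effects: either cue alone, such as transient hypoxia in healthy tissue, is not sufficient to trigger release.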
<br />
The creation of new life and the tampering with existing life have raised ethical concerns in the field of synthetic biology and are actively being discussed.<br />
<br />
创造新生命以及篡改现存生命引起了合成生物学领域的伦理问题,目前正处于积极的讨论中。<br />
<br />
<br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are ''[[Salmonella enterica subsp. enterica|Salmonella typhimurium]]'', [[Escherichia coli|''Escherichia coli'']], ''Bifidobacteria'', ''[[Streptococcus]]'', ''[[Lactobacillus]]'', ''[[Listeria]]'' and ''[[Bacillus subtilis]]''. Each of these species has its own properties and is unique in cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<br />
<br />
合成生物学的伦理问题有三个主要方面:生物安全(biosafety)、生物安保(biosecurity)以及新生命形式的创造。其他被提及的伦理问题包括对新造物的监管、新造物的专利管理、利益分配和科研诚信。<br />
<br />
==== Cell-based platform 基于细胞的平台====<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on [[Cancer immunotherapy|immunotherapies]], mostly by engineering [[T cell]]s.<br />
<br />
Ethical issues have surfaced for recombinant DNA and genetically modified organism (GMO) technologies and extensive regulations of genetic engineering and pathogen research were in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<br />
<br />
重组 DNA 和转基因生物(GMO)技术的伦理问题早已浮现,许多法域对基因工程和病原体研究有着广泛的监管。生物伦理问题总统委员会前任主席艾米·古特曼(Amy Gutmann)认为,我们应当避免对合成生物学、尤其是基因工程过度监管的倾向。古特曼认为:“监管上的克制在新兴技术领域尤为重要……在这些领域,出于不确定性和对未知的恐惧而扼杀创新的诱惑尤为强烈。法律和监管限制这类生硬手段不仅可能阻碍新利益的分配,还会因阻止研究人员开发有效的保障措施而不利于安全与安保。”<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer [[epitope]]s. [[Chimeric antigen receptor]]s (CARs) are composed of a fragment of an [[antibody]] fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second-generation CAR-based therapy was approved by the FDA.{{Citation needed|date=April 2018}}<br />
<br />
<br />
<br />
Gene switches were designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects.<ref>Jones, B.S., Lamb, L.S., Goldman, F. & Di Stasi, A. Improving the safety of cell therapy products by suicide gene transfer. Front. Pharmacol. 5, 254 (2014).</ref> Mechanisms can more finely control the system and stop and reactivate it.<ref>{{cite journal | last1 = Wei | first1 = P | last2 = Wong | first2 = WW | last3 = Park | first3 = JS | last4 = Corcoran | first4 = EE | last5 = Peisajovich | first5 = SG | last6 = Onuffer | first6 = JJ | last7 = Weiss | first7 = A | last8 = Li | first8 = WA | year = 2012 | title = Bacterial virulence proteins as tools to rewire kinase pathways in yeast and immune cells | url = | journal = Nature | volume = 488 | issue = 7411| pages = 384–388 | doi = 10.1038/nature11259 | pmid = 22820255 | pmc = 3422413 }}</ref><ref>{{cite journal | last1 = Danino | first1 = T. | last2 = Mondragon-Palomino | first2 = O. | last3 = Tsimring | first3 = L. | last4 = Hasty | first4 = J. | year = 2010 | title = A synchronized quorum of genetic clocks | url = | journal = Nature | volume = 463 | issue = 7279| pages = 326–330 | doi = 10.1038/nature08753 | pmid = 20090747 | pmc = 2838179 }}</ref> Since the number of T-cells is important for therapy persistence and severity, growth of T-cells is also controlled to tune the effectiveness and safety of therapeutics.<ref>{{cite journal | last1 = Chen | first1 = Y. Y. | last2 = Jensen | first2 = M. C. | last3 = Smolke | first3 = C. D. | year = 2010 | title = Genetic control of mammalian T-cell proliferation with synthetic RNA regulatory systems | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 107 | issue = 19| pages = 8531–6 | doi = 10.1073/pnas.1001721107 | pmid = 20421500 | pmc = 2889348 }}</ref><br />
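The control logic described above (a kill switch plus stop/reactivate mechanisms) can be sketched as a small state machine. This is a hypothetical sketch: the state names and inducer signals are illustrative assumptions, not a real CAR-T control design.

```python
# Minimal state-machine sketch of engineered-cell safety controls:
# a reversible pause/resume switch and an irreversible kill switch.
# States and inducer names are illustrative, not from any cited system.

class EngineeredTCell:
    def __init__(self):
        self.state = "active"  # active <-> paused, or -> dead (terminal)

    def signal(self, inducer: str) -> str:
        if self.state == "dead":
            return self.state          # kill switch is irreversible
        if inducer == "kill":          # severe side effects observed
            self.state = "dead"
        elif inducer == "pause":       # finer control: stop the system ...
            self.state = "paused"
        elif inducer == "resume":      # ... and reactivate it later
            self.state = "active"
        return self.state

cell = EngineeredTCell()
assert cell.signal("pause") == "paused"
assert cell.signal("resume") == "active"
assert cell.signal("kill") == "dead"
assert cell.signal("resume") == "dead"  # cannot revive after the kill switch
```

The design choice the sketch highlights is the asymmetry: pause/resume is reversible for fine control, while the kill switch is a one-way transition, which is what makes it a safety guarantee rather than just another regulator.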
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is small in scale, the potential benefits and dangers remain unknown, and most studies are subject to careful consideration and oversight. Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
一个伦理问题是:创造新的生命形式(有时被称为“扮演上帝”)是否可以接受。目前,创造自然界中不存在的新生命形式仍处于小规模阶段,其潜在的益处和危险尚不可知,并且大多数研究都受到审慎的考量和监督。在营养缺陷方面,可以将细菌和酵母改造为无法合成组氨酸(一种对所有生命都至关重要的氨基酸)。这样的微生物只能在实验室条件下富含组氨酸的培养基上生长,从而打消了它们可能扩散到不良区域的担忧。<br />
<br />
<br />
<br />
<br />
== Ethics 伦理问题 ==<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions.<br /><br />
<br />
一些伦理问题与生物安保有关:生物合成技术可能被蓄意用来危害社会和/或环境。由于合成生物学会引发伦理问题和生物安保问题,人类必须考虑并规划如何应对潜在的有害造物,以及可以采取何种伦理措施来阻止生物合成技术被恶意利用。然而,除了对合成生物学和生物技术公司的监管之外,这些问题并不被视为新问题,因为它们在早期关于重组 DNA 和转基因生物(GMO)的辩论中就已被提出,而且许多法域已经对基因工程和病原体研究实施了广泛的监管。<br /><br />
<br />
{{Update|section|date=January 2019}}<br />
<br />
<br />
<br />
The creation of new life and the tampering with existing life have raised [[Ethics|ethical concerns]] in the field of synthetic biology and are actively being discussed.<ref name=":3" /><br />
<br />
<br />
<br />
The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
欧盟资助的项目 SYNBIOSAFE 已经发布了关于如何管理合成生物学的报告。2007年的一篇论文确定了安全(safety)、安保(security)、伦理以及科学与社会接口方面的关键问题,项目将“科学与社会接口”定义为公众教育以及科学家、企业、政府和伦理学家之间的持续对话。SYNBIOSAFE 确定的关键安保问题涉及与销售合成 DNA 的公司以及由业余生物学家组成的生物黑客社区的接触。关键的伦理问题则涉及新生命形式的创造。<br />
<br />
Common ethical questions include:<br />
常见的伦理问题包括:<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<br />
<br />
随后的一份报告聚焦于生物安保,特别是所谓的“两用”挑战。例如,合成生物学虽然可能带来更高效的医疗产品生产,但也可能被用来合成或改造有害病原体(例如天花病毒)。生物黑客社区仍然是一个特别令人关切的来源,因为开源生物技术分散、扩散的特性使得追踪、监管或缓解潜在的生物安全和生物安保隐忧变得困难。<br />
<br />
* Is it morally right to tamper with nature?<br />
篡改自然在道德上是正确的吗?<br />
<br />
* Is one playing God when creating new life?<br />
创造新生命时,人是否是在扮演上帝?<br />
<br />
COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009.<br />
<br />
COSY 是欧洲的另一项倡议,主要关注公众认知与交流。为了更好地向更广泛的公众传达合成生物学及其社会影响,COSY 和 SYNBIOSAFE 于2009年10月发布了一部名为 SYNBIOSAFE 的38分钟纪录片。<br />
<br />
* What happens if a synthetic organism accidentally escapes?<br />
如果一种合成生命体意外地从实验室中泄露出去,会发生什么?<br />
<br />
* What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?<br />
假如有人滥用合成生物学并制造出一个有害的实体(例如生物武器),那该怎么办?<br />
<br />
The International Association Synthetic Biology has proposed self-regulation. It proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<br />
<br />
国际合成生物学协会(International Association Synthetic Biology)提出了行业自律方案,其中列出了合成生物学产业、特别是 DNA 合成公司应当实施的具体措施。2007年,由主要 DNA 合成公司的科学家牵头的一个小组发表了“为 DNA 合成产业建立有效监督框架的实用计划”。<br />
<br />
* Who will have control of and access to the products of synthetic biology? <br />
谁会拥有控制和访问合成生物产品的权限?<br />
<br />
* Who will gain from these innovations? Investors? Medical patients? Industrial farmers?<br />
谁会从这些创新中获益?投资者?患者?还是产业化的农业生产者?<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<br />
<br />
2009年7月9日至10日,美国国家学院科学、技术和法律委员会召开了一次名为“合成生物学新兴领域的机遇与挑战”的研讨会。<br />
<br />
* Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?<ref>{{Cite web|url=https://www.theguardian.com/science/2018/nov/26/worlds-first-gene-edited-babies-created-in-china-claims-scientist|title= World's first gene-edited babies created in China, claims scientist |last=Staff|first=Agencies|date=November 2018|website=The Guardian|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
* What if a new creation is deserving of moral or legal status?<br />
如果一个新造的生命体理应获得道德或法律地位,该怎么办?<br />
<br />
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'." It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public". Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<br />
<br />
在第一个合成基因组发表以及随之而来的关于“创造生命”的媒体报道之后,巴拉克·奥巴马总统设立了研究合成生物学的生物伦理问题总统委员会。该委员会召开了一系列会议,并于2010年12月发布了一份题为《新方向:合成生物学和新兴技术的伦理学》的报告。委员会指出:“虽然文特尔的成就标志着一项重大的技术进步,证明了一个相对较大的基因组可以被准确地合成并替换另一个基因组,但它并不等于‘创造生命’。”报告指出,合成生物学是一个新兴领域,既带来潜在的风险,也带来潜在的回报。该委员会没有建议改变政策或监督方式,而是呼吁继续为研究提供资金,并为监测、研究新出现的伦理问题和公众教育提供新的资金。这些安全问题可以通过政策立法规范生物技术的工业用途来避免。生物伦理总统委员会为回应从化学合成基因组中创造出自我复制细胞的消息,正在提出关于基因操纵的联邦指导方针,其中的18项建议“不仅是为了规范科学……也是为了教育公众”。理查德·路文汀(Richard Lewontin)写道,《合成生物学监督原则》中讨论的一些监督安全原则是合理的,但该宣言建议的主要问题在于“广大公众缺乏强制落实其中任何有意义内容的能力”。<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms.<ref>{{Cite journal|title=Synthetic Biology and Ethics: Past, Present, and Future|last=Hayry|first=Mattie|date=April 2017|journal=Cambridge Quarterly of Healthcare Ethics|volume=26|issue=2|pages=186–205|doi=10.1017/S0963180116000803|pmid=28361718}}</ref> Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<ref>{{Cite journal|title=Synthetic biology applied in the agrifood sector: Public perceptions, attitudes and implications for future studies|last=Jin |display-authors=etal |first=Shan|date=September 2019|journal=Trends in Food Science and Technology|volume=91|pages=454–466|doi=10.1016/j.tifs.2019.07.025}}</ref><ref name=":3">{{Cite journal|url=https://heinonline.org/HOL/LandingPage?handle=hein.journals/macq15&div=8&id=&page=| title=Synthetic Biology: Ethics, Exeptionalism and Expectations| pages=45| last=Newson|first=AJ|date=2015|journal=Macquarie Law Journal| volume=15|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
<br />
<br />
Ethical issues have surfaced for [[recombinant DNA]] and [[genetically modified organism]] (GMO) technologies and extensive regulations of [[genetic engineering]] and pathogen research were in place in many jurisdictions. [[Amy Gutmann]], former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<ref>{{cite journal | last = Gutmann | first = Amy | date = 2012 | title = The Ethics of Synthetic Biology | volume=41 | issue=4 | pages = 17–22 | journal = The Hastings Center Report | doi = 10.1002/j.1552-146X.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
<br />
<br />
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms.<br />
<br />
合成生物学的危害包括对工作人员和公众的生物安全(biosafety)危害、蓄意改造生物体以造成伤害所带来的生物安保(biosecurity)危害,以及环境危害。生物安全危害与现有生物技术领域的危害类似,主要是接触病原体和有毒化学品,不过新型合成生物可能带来新的风险。在生物安保方面,人们担心合成或重新设计的生物体在理论上可能被用于生物恐怖主义。潜在的风险包括从头再造已知病原体、把现有病原体改造得更加危险,以及改造微生物使其产生有害的生化物质。最后,环境危害包括对生物多样性和生态系统服务的不利影响,包括农业使用合成生物体可能导致的土地利用变化。<br />
<br />
=== The "creation" of life 创造生命 ===<br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<br />
<br />
通常认为,现有的转基因生物风险分析系统足以用于合成生物体,不过对于由单个基因序列“自下而上”构建的生物体,评估可能存在困难。一般而言,合成生物学适用现有的转基因生物和生物技术法规,以及针对下游商业产品的任何法规,尽管各法域通常都没有专门针对合成生物学的条例。<br />
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is small in scale, the potential benefits and dangers remain unknown, and most studies are subject to careful consideration and oversight.<ref name=":3" /> Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature’s "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how [[algal bloom]]s kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to possess [[nociception|pain perception]], [[sentience]], or self-perception. Should such life be given moral or legal rights? If so, how?<br />
<br />
<br />
<br />
=== Biosafety and biocontainment 生物安全与生物遏制 ===<br />
<br />
What is most ethically appropriate when considering biosafety measures? How can accidental introduction of synthetic life in the natural environment be avoided? Much ethical consideration and critical thought has been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics as there is yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild.<br />
<br />
<br />
<br />
In general, existing [[Hierarchy of hazard controls|hazard controls]], risk assessment methodologies, and regulations developed for traditional [[genetically modified organism]]s (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" [[biocontainment]] methods in a laboratory context include physical containment through [[biosafety cabinet]]s and [[glovebox]]es, as well as [[personal protective equipment]]. In an agricultural context they include isolation distances and [[pollen]] barriers, similar to methods for [[Biocontainment of genetically modified organisms|biocontainment of GMOs]]. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent [[horizontal gene transfer]] to natural organisms. Examples of intrinsic biocontainment include [[auxotrophy]], biological [[kill switch]]es, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of [[Xenobiology|xenobiological]] organisms using alternative biochemistry, for example using artificial [[xeno nucleic acid]]s (XNA) instead of DNA.<ref name=":12" /><ref name=":32">{{Cite journal|url=https://publications.europa.eu/en/publication-detail/-/publication/bfd7d06c-d3ae-11e5-a4b5-01aa75ed71a1/language-en|title=Opinion on synthetic biology II: Risk assessment methodologies and safety aspects|last=|first=|date=2016-02-12|website=EU [[Directorate-General for Health and Consumers]]|pages=|via=|doi=10.2772/63529|archive-url=|archive-date=|access-date=|volume=|publisher=Publications Office}}</ref> Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce [[histidine]], an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
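The auxotrophy mechanism described above can be sketched as a simple set-membership check. This is an illustrative toy model under stated assumptions: the nutrient names and the all-or-nothing growth rule are simplifications, not the kinetics of any real engineered strain.

```python
# Sketch of "intrinsic" biocontainment via auxotrophy: an engineered strain
# grows only when the medium supplies every nutrient it can no longer make
# (here, histidine). The growth rule is an illustrative assumption.

def can_grow(medium_nutrients: set,
             auxotrophies: frozenset = frozenset({"histidine"})) -> bool:
    """An auxotrophic strain grows only if all missing nutrients are supplied."""
    return auxotrophies <= medium_nutrients   # subset check: all needs met

lab_medium = {"glucose", "histidine"}   # histidine-rich laboratory medium
environment = {"glucose"}               # outside the lab: no free histidine

assert can_grow(lab_medium)             # contained growth in the lab
assert not can_grow(environment)        # escape into the wild: no growth
```

The subset check captures why auxotrophy acts as containment: growth is gated on a condition that, by design, only the laboratory environment satisfies.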
<br />
<br />
<br />
<br />
<br />
=== Biosecurity 生物安保 ===<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies,<ref name="Bügl, H. et al. 2007 627–629">{{cite journal | vauthors = Bügl H, Danner JP, Molinari RJ, Mulligan JT, Park HO, Reichert B, Roth DA, Wagner R, Budowle B, Scripp RM, Smith JA, Steele SJ, Church G, Endy D | title = DNA synthesis and biological security | journal = Nature Biotechnology | volume = 25 | issue = 6 | pages = 627–9 | date = June 2007 | pmid = 17557094 | doi = 10.1038/nbt0607-627 | s2cid = 7776829 }}</ref><ref>{{cite web|url = http://www.synbioproject.org/site/assets/files/1335/hastings.pdf|title = Ethical Issues in Synthetic Biology: An Overview of the Debates|date = |access-date = |website = }}</ref> however, the issues are not seen as new because they were raised during the earlier [[recombinant DNA]] and [[genetically modified organism]] (GMO) debates and extensive regulations of [[genetic engineering]] and pathogen research are already in place in many jurisdictions.<ref name="bioethics.gov">Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/synthetic-biology-report NEW DIRECTIONS The Ethics of Synthetic Biology and Emerging Technologies] Retrieved 2012-04-14.</ref><br /><br />
<br />
<br />
<br />
=== European Union 欧盟方面 ===<br />
<br />
<br />
<br />
The [[European Union]]-funded project SYNBIOSAFE<ref>[http://www.synbiosafe.eu/ SYNBIOSAFE official site]</ref> has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.<ref name="Priorities">{{cite journal | vauthors = Schmidt M, Ganguli-Mitra A, Torgersen H, Kelle A, Deplazes A, Biller-Andorno N | title = A priority paper for the societal and ethical aspects of synthetic biology | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 3–7 | date = December 2009 | pmid = 19816794 | pmc = 2759426 | doi = 10.1007/s11693-009-9034-7 | url = http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf }}</ref><ref>Schmidt M. Kelle A. Ganguli A, de Vriend H. (Eds.) 2009. [https://www.springer.com/biomed/book/978-90-481-2677-4 "Synthetic Biology. The Technoscience and its Societal Consequences".] Springer Academic Publishing.</ref> The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the [[Do-it-yourself biology|biohacking]] community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called [[dual use technology|dual-use]] challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., [[smallpox]]).<ref>{{cite journal | vauthors = Kelle A | title = Ensuring the security of synthetic biology-towards a 5P governance strategy | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 85–90 | date = December 2009 | pmid = 19816803 | pmc = 2759433 | doi = 10.1007/s11693-009-9041-8 }}</ref> The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<ref>{{cite journal | vauthors = Schmidt M | title = Diffusion of synthetic biology: a challenge to biosafety | journal = Systems and Synthetic Biology | volume = 2 | issue = 1–2 | pages = 1–6 | date = June 2008 | pmid = 19003431 | pmc = 2671588 | doi = 10.1007/s11693-008-9018-z | url = http://www.markusschmidt.eu/pdf/Diffusion_of_synthetic_biology.pdf }}</ref><br />
<br />
<br />
<br />
COSY, another European initiative, focuses on public perception and communication.<ref>[http://www.synbio.at/ COSY: Communicating Synthetic Biology]</ref><ref>{{cite journal | vauthors = Kronberger N, Holtz P, Kerbe W, Strasser E, Wagner W | title = Communicating Synthetic Biology: from the lab via the media to the broader public | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 19–26 | date = December 2009 | pmid = 19816796 | pmc = 2759424 | doi = 10.1007/s11693-009-9031-x }}</ref><ref>{{cite journal | vauthors = Cserer A, Seiringer A | title = Pictures of Synthetic Biology : A reflective discussion of the representation of Synthetic Biology (SB) in the German-language media and by SB experts | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 27–35 | date = December 2009 | pmid = 19816797 | pmc = 2759430 | doi = 10.1007/s11693-009-9038-3 }}</ref> To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published ''SYNBIOSAFE'', a 38-minute documentary film, in October 2009.<ref>[http://www.synbiosafe.eu/DVD COSY/SYNBIOSAFE Documentary]</ref><br />
<br />
<br />
<br />
The International Association Synthetic Biology has proposed self-regulation.<ref>Report of IASB [http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf "Technical solutions for biosecurity in synthetic biology"] {{webarchive |url=https://web.archive.org/web/20110719031805/http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf |date=July 19, 2011 }}, Munich, 2008</ref> This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<ref name="Bügl, H. et al. 2007 627–629" /><br />
<br />
<br />
<br />
=== United States 美国方面 ===<br />
<br />
<br />
<br />
In January 2009, the [[Alfred P. Sloan Foundation]] funded the [[Woodrow Wilson Center]], the [[Hastings Center]], and the [[J. Craig Venter Institute]] to examine the public perception, ethics and policy implications of synthetic biology.<ref>Parens E., Johnston J., Moses J. [http://www.thehastingscenter.org/who-we-are/our-research/selected-past-projects/ethical-issues-in-synthetic-biology-2/ Ethical Issues in Synthetic Biology.] 2009.</ref><br />
<br />
<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<ref>[http://sites.nationalacademies.org/PGA/stl/PGA_050738 NAS Symposium official site]</ref><br />
<br />
<br />
<br />
After the publication of the [[Mycoplasma laboratorium|first synthetic genome]] and the accompanying media coverage about "life" being created, President [[Barack Obama]] established the [[Presidential Commission for the Study of Bioethical Issues]] to study synthetic biology.<ref>Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/node/353 FAQ]</ref> The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'".<ref>[http://bioethics.gov/node/353 Synthetic Biology F.A.Q.'s | Presidential Commission for the Study of Bioethical Issues]</ref> It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education.<ref name="bioethics.gov" /><br />
<br />
<br />
<br />
Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact".<ref name=":2">{{cite journal | vauthors = Erickson B, Singh R, Winters P | title = Synthetic biology: regulating industry uses of new biotechnologies | journal = Science | volume = 333 | issue = 6047 | pages = 1254–6 | date = September 2011 | pmid = 21885775 | doi = 10.1126/science.1211066 | bibcode = 2011Sci...333.1254E | s2cid = 1568198 | url = https://semanticscholar.org/paper/6ae989f6b07dc3c8a8694792d6fe8f036a0e0292 }}</ref> These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".<ref name=":2" /><br />
<br />
<br />
<br />
=== Opposition 反对意见 ===<br />
<br />
On March 13, 2012, over 100 environmental and civil society groups, including [[Friends of the Earth]], the [[International Center for Technology Assessment]] and the [[ETC Group (AGETC)|ETC Group]] issued the manifesto ''The Principles for the Oversight of Synthetic Biology''. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the [[human genome]] or [[human microbiome]].<ref>Katherine Xue for Harvard Magazine. September–October 2014 [http://harvardmagazine.com/2014/09/synthetic-biologys-new-menagerie Synthetic Biology’s New Menagerie]</ref><ref>Yojana Sharma for Scidev.net March 15, 2012. [http://www.scidev.net/global/genomics/news/ngos-call-for-international-regulation-of-synthetic-biology.html NGOs call for international regulation of synthetic biology]</ref> [[Richard Lewontin]] wrote that some of the safety tenets for oversight discussed in ''The Principles for the Oversight of Synthetic Biology'' are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<ref>[http://www.nybooks.com/articles/archives/2014/may/08/new-synthetic-biology-who-gains/?insrc=rel#fnr-1 The New Synthetic Biology: Who Gains?] (2014-05-08), [[Richard C. Lewontin]], ''[[New York Review of Books]]''</ref><br />
<br />
<br />
<br />
== Health and safety 健康和安全 ==<br />
<br />
{{Main|Hazards of synthetic biology}}<br />
<br />
<br />
<br />
The hazards of synthetic biology include [[biosafety]] hazards to workers and the public, [[biosecurity]] hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks.<ref name=":02">{{Cite journal|url=https://blogs.cdc.gov/niosh-science-blog/2017/01/24/synthetic-biology/|title=Synthetic Biology and Occupational Risk|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|date=2017-01-24|journal=Journal of Occupational and Environmental Hygiene|archive-url=|archive-date=|access-date=2018-11-30|last3=Schulte|first3=Paul|volume=14|issue=3|pages=224–236|pmid=27754800|doi=10.1080/15459624.2016.1237031|s2cid=205893358}}</ref><ref name=":12">{{Cite journal|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|last3=Schulte|first3=Paul|date=2016-10-18|title=Synthetic biology and occupational risk|journal=Journal of Occupational and Environmental Hygiene|volume=14|issue=3|pages=224–236|doi=10.1080/15459624.2016.1237031|pmid=27754800|s2cid=205893358|issn=1545-9624}}</ref> For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for [[bioterrorism]]. 
Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.<ref name=":7">{{Cite book|title=Biodefense in the Age of Synthetic Biology|date=2018-06-19|publisher=[[National Academies of Sciences, Engineering, and Medicine]]|isbn=9780309465182|location=|pages=|doi=10.17226/24890|pmid=30629396|last1=National Academies Of Sciences|first1=Engineering|author2=Division on Earth Life Studies|last3=Board On Life|first3=Sciences|author4=Board on Chemical Sciences Technology|author5=Committee on Strategies for Identifying Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology}}</ref> Lastly, environmental hazards include adverse effects on [[biodiversity]] and [[ecosystem services]], including potential changes to land use resulting from agricultural use of synthetic organisms.<ref name=":8">{{Cite web|url=http://ec.europa.eu/environment/integration/research/newsalert/multimedia/synthetic_biology_and_biodiversity.htm|title=Future Brief: Synthetic biology and biodiversity|last=|first=|date=September 2016|website=European Commission|pages=14–15|archive-url=|archive-date=|access-date=2019-01-14}}</ref><ref>{{Cite web|url=https://publications.europa.eu/en/publication-detail/-/publication/9b231c71-faf1-11e5-b713-01aa75ed71a1/language-en/format-PDF|title=Final opinion on synthetic biology III: Risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology|last=|first=|date=2016-04-04|website=EU Directorate-General for Health and Food Safety|pages=8, 27|archive-url=|archive-date=|access-date=2019-01-14}}</ref><br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.<ref name=":32" /><ref name=":22">{{Cite web|url=http://www.hse.gov.uk/research/rrpdf/rr944.pdf|title=Synthetic biology: A review of the technology, and current and future needs from the regulatory framework in Great Britain|last1=Bailey|first1=Claire|last2=Metcalf|first2=Heather|date=2012|website=UK [[Health and Safety Executive]]|archive-url=|archive-date=|access-date=2018-11-29|last3=Crook|first3=Brian}}</ref> Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<ref name=":5">{{Citation|last1=Pei|first1=Lei|title=Regulatory Frameworks for Synthetic Biology|date=2012|work=Synthetic Biology|pages=157–226|publisher=John Wiley & Sons, Ltd|doi=10.1002/9783527659296.ch5|isbn=9783527659296|last2=Bar‐Yam|first2=Shlomiya|last3=Byers‐Corbin|first3=Jennifer|last4=Casagrande|first4=Rocco|last5=Eichler|first5=Florentine|last6=Lin|first6=Allen|last7=Österreicher|first7=Martin|last8=Regardh|first8=Pernilla C.|last9=Turlington|first9=Ralph D.}}</ref><ref name=":4">{{Cite journal|last=Trump|first=Benjamin D.|date=2017-11-01|title=Synthetic biology regulation and governance: Lessons from TAPIC for the United States, European Union, and Singapore|journal=Health Policy|volume=121|issue=11|pages=1139–1146|doi=10.1016/j.healthpol.2017.07.010|pmid=28807332|issn=0168-8510|doi-access=free}}</ref><br />
<br />
<br />
<br />
== See also 请参阅 ==<br />
<br />
{{Colbegin|colwidth=20em}}<br />
<br />
* ''[[ACS Synthetic Biology]]'' (journal)<br />
<br />
* [[Bioengineering]]<br />
<br />
* [[Biomimicry]]<br />
<br />
* [[Carlson Curve]]<br />
<br />
* [[Chiral life concept]]<br />
<br />
* [[Computational biology]]<br />
<br />
* [[Computational biomodeling]]<br />
<br />
* [[DNA digital data storage]]<br />
<br />
* [[Engineering biology]]<br />
<br />
[[Category:Biotechnology]]<br />
[[Category:Molecular genetics]]<br />
[[Category:Systems biology]]<br />
[[Category:Bioinformatics]]<br />
[[Category:Biocybernetics]]<br />
[[Category:Appropriate technology]]<br />
[[Category:Emerging technologies]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biology]]. Its edit history can be viewed at [[合成生物学/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共4491,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{redirect|Artificial life form|simulated life forms|Artificial life}}<br />
<br />
{{short description|Interdisciplinary branch of biology and engineering}}<br />
<br />
{{Synthetic biology}}<br />
<br />
[[File:Synthetic Biology Research at NASA Ames.jpg|thumb|Synthetic Biology Research at [[Ames Research Center|NASA Ames Research Center]]. NASA埃姆斯研究中心的合成生物学研究。]]<br />
<br />
<br />
<br />
<br />
'''Synthetic biology''' ('''SynBio''') is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
Synthetic biology (SynBio) is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
合成生物学(SynBio)是一个多学科的研究领域,旨在创造新的生物部件、设备和系统,或重新设计已经在自然界中发现的系统。<br />
<br />
<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as [[biotechnology]], [[genetic engineering]], [[molecular biology]], [[molecular engineering]], [[systems biology]], [[Model lipid bilayer|membrane science]], [[biophysics]], [[Biological engineering|chemical and biological engineering]], [[Electrical engineering|electrical and computer engineering]], [[control engineering]] and [[evolutionary biology]].<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biotechnology, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology.<br />
<br />
它是科学的一个分支,涵盖来自不同学科的广泛方法,例如生物技术、基因工程、分子生物学、分子工程、系统生物学、膜科学、生物物理学、化学与生物工程、电子与计算机工程、控制工程和进化生物学。<br />
<br />
<br />
<br />
Due to more powerful [[genetic engineering]] capabilities and decreased DNA synthesis and [[DNA sequencing|sequencing costs]], the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<ref>{{cite journal | last1 = Bueso | first1 = F. Y. | last2 = Tangney | first2 = M. | year = 2017 | title = Synthetic Biology in the Driving Seat of the Bioeconomy | url = | journal = Trends in Biotechnology | volume = 35 | issue = 5| pages = 373–378 | doi = 10.1016/j.tibtech.2017.02.002 | pmid = 28249675 }}</ref><br />
<br />
Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<br />
<br />
由于更强大的基因工程能力和降低的 DNA 合成及测序成本,合成生物学领域正在迅速发展。2016年,来自40个国家的350多家公司积极参与合成生物学应用; 所有这些公司在全球市场的净值估计为39亿美元。<br />
<br />
<br />
<br />
== Definition 定义 ==<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
合成生物学目前还没有公认的定义。以下是一些定义的示例:<br />
<br />
<br />
<br />
* "the use of a mixture of physical engineering and genetic engineering to create new (and, therefore, synthetic) life forms混合使用物理工程和基因工程来创建新的(因而也即合成的)生命形式。"<ref>{{cite journal | last1 = Hunter | first1 = D | year = 2013 | title = How to object to radically new technologies on the basis of justice: the case of synthetic biology | url = | journal = Bioethics | volume = 27 | issue = 8| pages = 426–434 | doi = 10.1111/bioe.12049 | pmid = 24010854 }}</ref><br />
<br />
<br />
* "an emerging field of research that aims to combine the knowledge and methods of biology, engineering and related disciplines in the design of chemically synthesized DNA to create organisms with novel or enhanced characteristics and traits一个新兴的研究领域,旨在将生物学、工程学和相关学科领域的知识和方法结合到化学合成 DNA 的设计中,从而创造出具有新颖或增强特性和特征的有机体。"<ref>{{cite journal | last1 = Gutmann | first1 = A | year = 2011 | title = The ethics of synthetic biology: guiding principles for emerging technologies | url = | journal = Hastings Center Report | volume = 41 | issue = 4| pages = 17–22 | doi = 10.1002/j.1552-146x.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
* "designing and constructing [[BioBrick|biological modules]], [[biological systems]], and [[biological machine]]s, or re-design of existing biological systems for useful purposes设计并构建生物积木、生物系统以及生物机器,或为有用的目的重新设计现有的生物系统。"<ref name="NakanoEckford2013">{{cite book|url={{google books |plainurl=y |id=uVhsAAAAQBAJ}}|title=Molecular Communication|last1=Nakano|first1=Tadashi|last2=Eckford|first2=Andrew W.|last3=Haraguchi|first3=Tokuko|date=12 September 2013|publisher=Cambridge University Press|isbn=978-1-107-02308-6|name-list-style=vanc}}</ref><br />
<br />
<br />
* "applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature" (the European Commission, 2005). This can include the possibility of a [[molecular assembler]], based upon biomolecular systems such as the [[ribosome]].<ref name="RoadMap">{{Cite web|url=http://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf|title=Productive Nanosystems: A Technology Roadmap|website=Foresight Institute}}</ref><br />
“将系统设计的工程范式应用到生物系统中,以产生具有自然界中不存在的新功能的可预测且健全的系统”(欧洲委员会,2005年)。这可能包括基于生物分子系统(例如核糖体)的分子组装器的可能性。<br />
<br />
<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
值得注意的是,合成生物学在传统上被分为两种不同的方法: 自上而下和自下而上。<br />
<br />
<br />
<br />
# The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
自上而下的方法包括利用代谢和基因工程技术赋予活细胞以新的功能。<br />
<br />
# The <u>bottom up</u> approach involves creating new biological systems ''in vitro'' by bringing together 'non-living' biomolecular components,<ref>{{cite journal | vauthors = Schwille P | title = Bottom-up synthetic biology: engineering in a tinkerer's world | journal = Science | volume = 333 | issue = 6047 | pages = 1252–4 | date = September 2011 | pmid = 21885774 | doi = 10.1126/science.1211701 | bibcode = 2011Sci...333.1252S | s2cid = 43354332 }}</ref> often with the aim of constructing an [[artificial cell]].<br />
<br />
The <u>bottom up</u> approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.<br />
<br />
自下而上的方法包括在体外创建新的生物系统,将“非活性”的生物分子组件聚集在一起,其目的通常是构建一个人工细胞。<br />
<br />
<br />
<br />
Biological systems are thus assembled module-by-module. [[Cell-free protein synthesis|Cell-free protein expression systems]] are often employed,<ref>{{cite journal | vauthors = Noireaux V, Libchaber A | title = A vesicle bioreactor as a step toward an artificial cell assembly | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 101 | issue = 51 | pages = 17669–74 | date = December 2004 | pmid = 15591347 | pmc = 539773 | doi = 10.1073/pnas.0408236101 | bibcode = 2004PNAS..10117669N }}</ref><ref>{{cite journal | vauthors = Hodgman CE, Jewett MC | title = Cell-free synthetic biology: thinking outside the cell | journal = Metabolic Engineering | volume = 14 | issue = 3 | pages = 261–9 | date = May 2012 | pmid = 21946161 | pmc = 3322310 | doi = 10.1016/j.ymben.2011.09.002 }}</ref><ref>{{cite journal | vauthors = Elani Y, Law RV, Ces O | title = Protein synthesis in artificial cells: using compartmentalisation for spatial organisation in vesicle bioreactors | journal = Physical Chemistry Chemical Physics | volume = 17 | issue = 24 | pages = 15534–7 | date = June 2015 | pmid = 25932977 | doi = 10.1039/C4CP05933F | bibcode = 2015PCCP...1715534E | doi-access = free }}</ref> as are membrane-based molecular machinery. 
There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells,<ref>{{cite journal | vauthors = Elani Y, Trantidou T, Wylie D, Dekker L, Polizzi K, Law RV, Ces O | title = Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules | journal = Scientific Reports | volume = 8 | issue = 1 | pages = 4564 | date = March 2018 | pmid = 29540757 | pmc = 5852042 | doi = 10.1038/s41598-018-22263-3 | bibcode = 2018NatSR...8.4564E }}</ref> and engineering communication between living and synthetic cell populations.<ref>{{cite journal | vauthors = Lentini R, Martín NY, Forlin M, Belmonte L, Fontana J, Cornella M, Martini L, Tamburini S, Bentley WE, Jousson O, Mansy SS | title = Two-Way Chemical Communication between Artificial and Natural Cells | journal = ACS Central Science | volume = 3 | issue = 2 | pages = 117–123 | date = February 2017 | pmid = 28280778 | pmc = 5324081 | doi = 10.1021/acscentsci.6b00330 }}</ref><br />
<br />
Biological systems are thus assembled module-by-module. Cell-free protein expression systems are often employed, as are membrane-based molecular machinery. There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells, and engineering communication between living and synthetic cell populations.<br />
<br />
生物系统就是这样一个模块一个模块地组装起来的。无细胞蛋白表达系统经常被采用,以膜为基础的分子机器也是如此。人们正越来越多地努力通过构建活体/合成混合细胞,以及在活细胞群与合成细胞群之间建立工程化通讯,来弥合这两种方法之间的鸿沟。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
'''1910:''' First identifiable use of the term "synthetic biology" in [[Stéphane Leduc]]'s publication ''Théorie physico-chimique de la vie et générations spontanées''.<ref>[https://openlibrary.org/books/OL23348076M/Théorie_physico-chimique_de_la_vie_et_générations_spontanées Théorie physico-chimique de la vie et générations spontanées, S. Leduc, 1910]</ref> He also noted this term in another publication, ''La Biologie Synthétique'' in 1912.<ref>{{cite book |url=http://www.peiresc.org/bstitre.htm |title=La biologie synthétique, étude de biophysique |last=Leduc |first=Stéphane |date=1912 | veditors = Poinat A }}</ref><br />
<br />
1910: First identifiable use of the term "synthetic biology" in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also noted this term in another publication, La Biologie Synthétique in 1912.<br />
<br />
1910年: “合成生物学”一词首次可确认的使用出现在斯特凡纳·勒杜克 (Stéphane Leduc) 的著作《Théorie physico-chimique de la vie et générations spontanées》中。他还在1912年的另一本出版物《La Biologie Synthétique》中提到了这个术语。<br />
<br />
<br />
<br />
'''1961:''' Jacob and Monod postulate cellular regulation by molecular networks from their study of the ''lac'' operon in ''E. coli'' and envisioned the ability to assemble new systems from molecular components.<ref>Jacob, F.ß. & Monod, J. On the regulation of gene activity. Cold Spring Harb. Symp. Quant. Biol. 26, 193–211 (1961).</ref><br />
<br />
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components.<br />
<br />
1961年: 雅各布 (Jacob) 和莫诺 (Monod) 根据他们对大肠杆菌乳糖操纵子的研究,提出了由分子网络实现细胞调控的假设,并设想了由分子组件组装新系统的能力。<br />
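The regulatory logic Jacob and Monod described for the ''lac'' operon can be sketched as a two-input boolean function. This is a deliberately simplified illustration: it ignores leaky expression and graded responses, and the function name is ours, not standard nomenclature.

```python
def lac_operon_active(lactose_present: bool, glucose_present: bool) -> bool:
    """Simplified lac operon logic: transcription proceeds only when
    lactose is available (allolactose inactivates the LacI repressor)
    and glucose is scarce (high cAMP lets CAP activate the promoter)."""
    return lactose_present and not glucose_present

# Truth table of the simplified circuit:
for lac in (False, True):
    for glc in (False, True):
        print(lac, glc, lac_operon_active(lac, glc))
```

Viewed this way, the operon behaves like a small AND-NOT gate, which is exactly the analogy to electronic circuits that later synthetic circuit designs build on.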
<br />
<br />
<br />
'''1973:''' First molecular cloning and amplification of DNA in a plasmid is published in ''P.N.A.S.'' by Cohen, Boyer ''et al.'', constituting the dawn of synthetic biology.<ref>{{cite journal | vauthors = Cohen SN, Chang AC, Boyer HW, Helling RB | title = Construction of biologically functional bacterial plasmids in vitro | journal = Proc. Natl. Acad. Sci. USA | volume = 70 | issue = 11 | pages = 3240–3244 | date = 1973 | pmid = 4594039 | doi = 10.1073/pnas.70.11.3240 | bibcode = 1973PNAS...70.3240C | pmc = 427208 }}</ref><br />
<br />
1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al., constituting the dawn of synthetic biology.<br />
<br />
1973年: 科恩 (Cohen)、博耶 (Boyer) 等人在 P.N.A.S. 上发表了第一篇关于质粒中 DNA 分子克隆和扩增的文章,标志着合成生物学的开端。<br />
<br />
<br />
<br />
'''1978:''' [[Werner Arber|Arber]], [[Daniel Nathans|Nathans]] and [[Hamilton O. Smith|Smith]] win the [[Nobel Prize in Physiology or Medicine]] for the discovery of [[restriction enzyme]]s, leading Szybalski to offer an editorial comment in the journal ''[[Gene (journal)|Gene]]'':<br />
<br />
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene:<br />
<br />
1978年: 阿尔伯 (Arber)、纳森斯 (Nathans) 和史密斯 (Smith) 因发现限制性内切酶而获得诺贝尔生理学或医学奖,这使得齐巴尔斯基 (Szybalski) 在《基因》(Gene) 杂志上发表了一篇社论评论:<br />
<br />
<br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.<ref>{{cite journal | vauthors = Szybalski W, Skalka A | title = Nobel prizes and restriction enzymes | journal = Gene | volume = 4 | issue = 3 | pages = 181–2 | date = November 1978 | pmid = 744485 | doi = 10.1016/0378-1119(78)90016-1 }}</ref></blockquote><br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.</blockquote><br />
<br />
限制性核酸酶的研究不仅使我们能够很容易地构建重组 DNA 分子和分析单个基因,而且使我们进入了合成生物学的新时代,不仅可以描述和分析现有的基因,而且可以构建和评估新的基因排列。<br />
<br />
<br />
<br />
'''1988:''' First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in ''Science'' by Mullis ''et al.''<ref>{{cite journal | vauthors = Saiki RK, Gelfand DH, Stoffel S, Scharf SJ, Higuchi R, Horn GT, Mullis KB, Erlich HA | title = Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase | journal = Science | volume = 239 | issue = 4839 | pages = 487–491 | date = 1988 | pmid = 2448875 | doi = 10.1126/science.239.4839.487 }}</ref> This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988年: 第一次利用热稳定的 DNA 聚合酶进行聚合酶链式反应以实现 DNA 扩增(PCR)的成果由马利斯 (Mullis) 等人发表在《科学》杂志上,这样就避免了在每次 PCR 循环后增加新的 DNA 聚合酶,从而大大简化了 DNA 的突变和组装。<br />
<br />
<br />
<br />
'''2000:''' Two papers in [[Nature (journal)|Nature]] report [[synthetic biological circuits]], a genetic toggle switch and a biological clock, by combining genes within [[Escherichia coli|''E. coli'']] cells.<ref name=":0">{{cite journal | vauthors = Elowitz MB, Leibler S | title = A synthetic oscillatory network of transcriptional regulators | journal = Nature | volume = 403 | issue = 6767 | pages = 335–8 | date = January 2000 | pmid = 10659856 | doi = 10.1038/35002125 | bibcode = 2000Natur.403..335E | s2cid = 41632754 }}</ref><ref name=":1">{{cite journal | vauthors = Gardner TS, Cantor CR, Collins JJ | title = Construction of a genetic toggle switch in Escherichia coli | journal = Nature | volume = 403 | issue = 6767 | pages = 339–42 | date = January 2000 | pmid = 10659857 | doi = 10.1038/35002131 | bibcode = 2000Natur.403..339G | s2cid = 345059 }}</ref><br />
<br />
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.<br />
<br />
2000年: 《自然》杂志的两篇论文报告了通过组合大肠杆菌细胞内的基因而构建的合成生物电路: 一个基因切换开关和一个生物钟。<br />
<br />
<br />
<br />
'''2003:''' The most widely used standardized DNA parts, [[BioBrick]] plasmids, are invented by [[Tom Knight (scientist)|Tom Knight]].<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003年: 最广泛使用的标准化 DNA 部件,即生物积木 (BioBrick) 质粒,由汤姆·奈特 (Tom Knight) 发明。这些部件将成为次年在麻省理工学院创办的国际基因工程机器竞赛 (iGEM) 的核心。<br />
<br />
<br />
<br />
[[File:Synthetic Biology Open Language (SBOL) standard visual symbols.png|thumb|upright=1.25| [[Synthetic Biology Open Language]] (SBOL) standard visual symbols for use with [[BioBrick|BioBricks Standard]]]]<br />
<br />
Synthetic Biology Open Language (SBOL) standard visual symbols for use with BioBricks Standard<br />
<br />
与生物积木标准一起使用的合成生物学开放式语言 (SBOL) 标准视觉符号<br />
<br />
<br />
<br />
'''2003:''' Researchers engineer an artemisinin precursor pathway in ''E. coli''.<ref>Martin, V. J., Pitera, D. J., Withers, S. T., Newman, J. D. & Keasling, J. D. Engineering a mevalonate pathway in Escherichia coli for production of terpenoids. Nature Biotech. 21, 796–802 (2003).</ref><br />
<br />
2003: Researchers engineer an artemisinin precursor pathway in E. coli.<br />
<br />
2003年: 研究人员在大肠杆菌中设计出青蒿素前体途径。<br />
<br />
<br />
<br />
'''2004:''' First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004年: 第一届合成生物学国际会议,合成生物学1.0(SB1.0)在美国麻省理工学院举行。<br />
<br />
<br />
<br />
'''2005:''' Researchers develop a light-sensing circuit in ''E. coli''.<ref>{{cite journal | last1 = Levskaya | first1 = A. | display-authors = etal | year = 2005 | title = "Synthetic biology " engineering Escherichia coli to see light | url = | journal = Nature | volume = 438 | issue = 7067| pages = 441–442 | doi = 10.1038/nature04405 | pmid = 16306980 | s2cid = 4428475 }}</ref> Another group designs circuits capable of multicellular pattern formation.<ref>Basu, S., Gerchman, Y., Collins, C. H., Arnold, F. H. & Weiss, R. "A synthetic multicellular system for programmed pattern formation. ''Nature'' 434,</ref><br />
<br />
2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation.<br />
<br />
2005年: 研究人员在大肠杆菌中开发出一种感光电路。另一个研究小组设计出了能够形成多细胞模式的电路。<br />
<br />
<br />
<br />
'''2006:''' Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<ref>{{cite journal | last1 = Anderson | first1 = J. C. | last2 = Clarke | first2 = E. J. | last3 = Arkin | first3 = A. P. | last4 = Voigt | first4 = C. A. | year = 2006 | title = Environmentally controlled invasion of cancer cells by engineered bacteria | url = | journal = J. Mol. Biol. | volume = 355 | issue = 4| pages = 619–627 | doi = 10.1016/j.jmb.2005.10.076 | pmid = 16330045 }}</ref><br />
<br />
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<br />
<br />
2006年: 研究人员设计了一种能促进细菌侵入肿瘤细胞的合成电路。<br />
<br />
<br />
<br />
'''2010:''' Researchers publish in ''Science'' the first synthetic bacterial genome, called ''M. mycoides'' JCVI-syn1.0.<ref name="gibson52" /><ref>{{Cite news|url=https://www.telegraph.co.uk/news/science/science-news/7747779/American-scientist-who-created-artificial-life-denies-playing-God.html|title=American scientist who created artificial life denies 'playing God'|last=|first=|date=May 2010|website=The Telegraph|url-status=live|archive-url=|archive-date=|access-date=}}</ref> The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010年: 研究人员在《科学》杂志上发表了第一个人工合成的细菌基因组,名为丝状支原体 JCVI-syn1.0。该基因组由化学合成的 DNA 经酵母重组组装而成。<br />
<br />
<br />
<br />
'''2011:''' Functional synthetic chromosome arms are engineered in yeast.<ref>{{cite journal | last1 = Dymond | first1 = J. S. | display-authors = etal | year = 2011 | title = Synthetic chromosome arms function in yeast and generate phenotypic diversity by design | url = | journal = Nature | volume = 477 | issue = 7365 | pages = 816–821 | doi = 10.1038/nature10403 | pmid = 21918511 | pmc = 3774833 }}</ref><br />
<br />
2011: Functional synthetic chromosome arms are engineered in yeast.<br />
<br />
2011年: 成功在酵母中设计出功能性合成染色体臂。<br />
<br />
<br />
<br />
'''2012:''' Charpentier and Doudna labs publish in ''Science'' the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.<ref>{{cite journal | vauthors = Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E | title = A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity | journal = Science | volume = 337 | issue = 6096 | pages = 816–821 | date = 2012 | pmid = 22745249 | doi = 10.1126/science.1225829 | pmc = 6286148 }}</ref> This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012年: Charpentier 和 Doudna 实验室在《科学》杂志上发表了对 CRISPR-Cas9 细菌免疫系统进行编程以实现靶向 DNA 切割的成果。这项技术极大地简化和扩展了真核生物的基因编辑。<br />
<br />
<br />
<br />
'''2019:''' Scientists at [[ETH Zurich]] report the creation of the first [[bacterial genome]], named ''[[Caulobacter crescentus|Caulobacter ethensis-2.0]]'', made entirely by a computer, although a related [[wikt:viability|viable form]] of ''C. ethensis-2.0'' does not yet exist.<ref name="EA-20190401">{{cite news |author=ETH Zurich |title=First bacterial genome created entirely with a computer |url=https://www.eurekalert.org/pub_releases/2019-04/ez-fbg032819.php |date=1 April 2019 |work=[[EurekAlert!]] |accessdate=2 April 2019 |author-link=ETH Zurich }}</ref><ref name="PNAS20190401">{{cite journal |author=Venetz, Jonathan E. |display-authors=et al. |title=Chemical synthesis rewriting of a bacterial genome to achieve design flexibility and biological functionality |date=1 April 2019 |journal=[[Proceedings of the National Academy of Sciences of the United States of America]] |volume=116 |issue=16 |pages=8070–8079 |doi=10.1073/pnas.1818259116 |pmid=30936302 |pmc=6475421 }}</ref><br />
<br />
2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.<br />
<br />
2019年: 苏黎世联邦理工学院 (ETH Zurich) 的科学家报告说,他们已经创造出了第一个细菌基因组,并将其命名为 Caulobacter ethensis-2.0 ,这个基因组完全是由计算机制造的,尽管与之相关的可存活的Caulobacter ethensis-2.0还不存在。<br />
<br />
<br />
<br />
'''2019:''' Researchers report the production of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 61 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515">{{cite news |last=Zimmer |first=Carl |authorlink=Carl Zimmer |title=Scientists Created Bacteria With a Synthetic Genome. Is This Artificial Life? - In a milestone for synthetic biology, colonies of E. coli thrive with DNA constructed from scratch by humans, not nature. |url=https://www.nytimes.com/2019/05/15/science/synthetic-genome-bacteria.html |date=15 May 2019 |work=[[The New York Times]] |accessdate=16 May 2019 }}</ref><ref name="NAT-20190515">{{cite journal |author=Fredens, Julius |display-authors=et al. |title=Total synthesis of Escherichia coli with a recoded genome |date=15 May 2019 |journal=[[Nature (journal)|Nature]] |volume=569 |issue=7757 |pages=514–518 |doi=10.1038/s41586-019-1192-5 |pmid=31092918 |pmc=7039709 |bibcode=2019Natur.569..514F }}</ref><br />
<br />
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacteria Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 61 codons instead, in order to encode 20 amino acids.<br />
<br />
2019年: 研究人员报告了一种新的合成(可能是人工的)可存活生命形式的产生,即大肠杆菌的一个变种,其通过将细菌基因组中64个密码子的天然数目减少到61个密码子来编码20种氨基酸。<br />
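The recoding idea can be sketched in a few lines. The sketch below is a simplified illustration: selected codons are replaced codon-by-codon with synonyms that encode the same amino acid (or the same stop signal), so the proteins are unchanged while the codon repertoire shrinks. The replacement table here (two serine codons and the amber stop codon) follows the published synonym choices, but real genome recoding must also avoid disrupting overlapping genes and regulatory elements:

```python
# Synonymous recoding table: TCG/TCA (serine) and TAG (amber stop) are
# replaced by synonyms, removing three codons from the genome's repertoire.
RECODING = {"TCG": "AGC", "TCA": "AGT", "TAG": "TAA"}

def recode(seq: str) -> str:
    """Replace target codons with synonymous ones, codon by codon."""
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return "".join(RECODING.get(c, c) for c in codons)

print(recode("ATGTCGTCATAG"))  # ATGAGCAGTTAA
```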
<br />
<br />
<br />
== Perspectives 各方观点 ==<br />
<br />
Engineers view biology as a ''technology'' (in other words, a given system's ''[[biotechnology]]'' or its ''[[biological engineering]]'').<ref>{{cite journal | volume = 6 | last = Zeng | first = Jie (Bangzhe) | title = On the concept of systems bio-engineering | journal = Communication on Transgenic Animals, June 1994, CAS, PRC }}</ref> Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see [[Biomedical Engineering]]) and our environment.<ref>{{cite journal | volume = 6 | last = Chopra | first = Paras | author2 = Akhil Kamma | title = Engineering life through Synthetic Biology | journal = In Silico Biology }}</ref><br />
<br />
Engineers view biology as a technology (in other words, a given system's biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see Biomedical Engineering) and our environment.<br />
<br />
工程师将生物学视为一种技术(换句话说,即特定系统的生物技术或其生物工程)。合成生物学包括对生物技术的广泛重新定义和扩展,其最终目标是设计和建造能够处理信息、操纵化学物质、制造材料和结构、生产能源、提供食物以及维护和增强人类健康(见生物医学工程)与环境的工程化生物系统。<br />
<br />
<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. {{citation needed|date=May 2020}}<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. <br />
<br />
合成生物学的研究可以根据其处理问题的方法大致分为: 生物部件的标准化、生物分子工程和基因组工程。<br />
<br />
<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. [[Genetic engineering]] includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
生物分子工程包括旨在创建功能单元工具包的方法,这些功能单元可以被引入活细胞以实现新的技术功能。基因工程包括为完整或最小化生物体构建合成染色体的方法。<br />
<br />
<br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<ref>{{cite journal | vauthors = Channon K, Bromley EH, Woolfson DN | title = Synthetic biology through biomolecular design and engineering | journal = Current Opinion in Structural Biology | volume = 18 | issue = 4 | pages = 491–8 | date = August 2008 | pmid = 18644449 | doi = 10.1016/j.sbi.2008.06.006 }}</ref><br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<br />
<br />
生物分子设计是指对生物分子组分进行从头设计和加性组合的总体思想。这些方法都有一个相似的任务: 通过创造性地操作前一层次中较简单的部分,在更高的复杂性水平上开发出更具合成性的实体。<br />
<br />
<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate.<ref>{{cite journal | first = M | last = Stone | title = Life Redesigned to Suit the Engineering Crowd | journal = Microbe | volume = 1 | issue = 12 | pages = 566–570 | date = 2006 | s2cid = 7171812 | url = https://pdfs.semanticscholar.org/8d45/e0f37a0fb6c1a3c659c71ee9c52619b18364.pdf }}</ref> Re-writers draw inspiration from [[refactoring]], a process sometimes used to improve computer software.<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.<br />
<br />
另一方面,“重写者”指的是有兴趣检验生物系统不可约性的合成生物学家。由于天然生物系统十分复杂,从头开始重建感兴趣的自然系统反而更简单,从而可以提供更容易理解、控制和操作的工程替代品。重写者从重构中获得灵感,重构是一种有时用于改进计算机软件的过程。<br />
<br />
<br />
<br />
== Enabling technologies 使能技术 ==<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include [[standardization]] of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.<ref>{{cite journal | vauthors = Baker D, Church G, Collins J, Endy D, Jacobson J, Keasling J, Modrich P, Smolke C, Weiss R | title = Engineering life: building a fab for biology | journal = Scientific American | volume = 294 | issue = 6 | pages = 44–51 | date = June 2006 | pmid = 16711359 | doi = 10.1038/scientificamerican0606-44 | bibcode = 2006SciAm.294f..44B }}</ref> Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and [[computer-aided design]] (CAD).<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).<br />
<br />
一些新的使能技术对合成生物学的成功至关重要。相关概念包括生物部件的标准化,以及允许在合成系统中使用这些部件的层次抽象。基础技术包括 DNA 的读写(测序与合成)。为了进行精确建模和计算机辅助设计 (CAD),需要在多种条件下进行测量。<br />
<br />
<br />
<br />
=== DNA and gene synthesis DNA 和基因合成===<br />
<br />
{{Main|Artificial gene synthesis|Synthetic genomics}}Driven by dramatic decreases in costs of [[oligonucleotides|oligonucleotide]] ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level.<ref>{{cite journal | vauthors = Kosuri S, Church GM | title = Large-scale de novo DNA synthesis: technologies and applications | journal = Nature Methods | volume = 11 | issue = 5 | pages = 499–507 | date = May 2014 | pmid = 24781323 | doi = 10.1038/nmeth.2918 | pmc = 7098426 }}</ref> In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) [[Hepatitis C]] virus genome from chemically synthesized 60 to 80-mers.<ref>{{cite journal | vauthors = Blight KJ, Kolykhalov AA, Rice CM | title = Efficient initiation of HCV RNA replication in cell culture | journal = Science | volume = 290 | issue = 5498 | pages = 1972–4 | date = December 2000 | pmid = 11110665 | doi = 10.1126/science.290.5498.1972 | bibcode = 2000Sci...290.1972B }}</ref> In 2002 researchers at [[Stony Brook University]] succeeded in synthesizing the 7741 bp [[poliovirus]] genome from its published sequence, producing the second synthetic genome, spanning two years.<ref>{{cite journal | vauthors = Couzin J | title = Virology. 
Active poliovirus baked from scratch | journal = Science | volume = 297 | issue = 5579 | pages = 174–5 | date = July 2002 | pmid = 12114601 | doi = 10.1126/science.297.5579.174b | s2cid = 83531627 | url = https://semanticscholar.org/paper/248000e7bc654631ae217274a77253ceddf270a1 }}</ref> In 2003 the 5386 bp genome of the [[bacteriophage]] [[Phi X 174]] was assembled in about two weeks.<ref name="assembly2003">{{cite journal | vauthors = Smith HO, Hutchison CA, Pfannkoch C, Venter JC | title = Generating a synthetic genome by whole genome assembly: phiX174 bacteriophage from synthetic oligonucleotides | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 26 | pages = 15440–5 | date = December 2003 | pmid = 14657399 | pmc = 307586 | doi = 10.1073/pnas.2237126100 | bibcode = 2003PNAS..10015440S }}</ref> In 2006, the same team, at the [[J. Craig Venter Institute]], constructed and patented a [[Synthetic genomics|synthetic genome]] of a novel minimal bacterium, ''[[Mycoplasma laboratorium]]'' and were working on getting it functioning in a living cell.<ref>{{cite news|url=https://www.nytimes.com/2007/06/29/science/29cells.html|title=Scientists Transplant Genome of Bacteria|last=Wade|first=Nicholas|date=2007-06-29|work=The New York Times|access-date=2007-12-28|issn=0362-4331}}</ref><ref>{{cite journal | vauthors = Gibson DG, Benders GA, Andrews-Pfannkoch C, Denisova EA, Baden-Tillson H, Zaveri J, Stockwell TB, Brownley A, Thomas DW, Algire MA, Merryman C, Young L, Noskov VN, Glass JI, Venter JC, Hutchison CA, Smith HO | title = Complete chemical synthesis, assembly, and cloning of a Mycoplasma genitalium genome | journal = Science | volume = 319 | issue = 5867 | pages = 1215–20 | date = February 2008 | pmid = 18218864 | doi = 10.1126/science.1151721 | bibcode = 2008Sci...319.1215G | s2cid = 8190996 | url = https://semanticscholar.org/paper/8c662fd0e252c85d056aad7ff16009ebe1dd4cbc }}</ref><ref 
name="Ball">{{cite journal|last1=Ball|first1=Philip|date=2016|title=Man Made: A History of Synthetic Life|url=https://www.sciencehistory.org/distillations/magazine/man-made-a-history-of-synthetic-life|journal=Distillations|volume=2|issue=1|pages=15–23|access-date=22 March 2018}}</ref><br />
<br />
Driven by dramatic decreases in costs of oligonucleotide ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) Hepatitis C virus genome from chemically synthesized 60 to 80-mers. In 2002 researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome, spanning two years. In 2003 the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium and were working on getting it functioning in a living cell.<br />
<br />
由于寡核苷酸 (oligo) 合成成本的大幅降低和 PCR 的出现,由寡核苷酸构建的 DNA 的规模已经达到基因组水平。2000年,研究人员报道了利用化学合成的60至80聚体寡核苷酸合成9.6 kbp(千碱基对)丙型肝炎病毒基因组的成果。2002年,石溪大学的研究人员成功地根据已发表的序列合成了7741 bp 的脊髓灰质炎病毒基因组,历时两年,产生了第二个合成基因组。2003年,噬菌体 Phi X 174 的5386 bp 基因组在大约两周内组装完成。2006年,克莱格·凡特 (J. Craig Venter) 研究所的同一个团队构建了一种新型最小细菌,即实验室支原体 (Mycoplasma laboratorium) 的合成基因组并申请了专利,他们当时正致力于使其在活细胞中发挥功能。<br />
<br />
<br />
<br />
In 2007 it was reported that several companies were offering [[gene synthesis|synthesis of genetic sequences]] up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks.<ref>{{cite news| issn = 0362-4331| last = Pollack| first = Andrew| title = How Do You Like Your Genes? Biofabs Take Orders | work = The New York Times | access-date = 2007-12-28| date = 2007-09-12 | url = https://www.nytimes.com/2007/09/12/technology/techspecial/12gene.html?pagewanted=2&_r=1}}</ref> [[Oligonucleotide]]s harvested from a photolithographic- or inkjet-manufactured [[DNA chip]] combined with PCR and DNA mismatch error-correction allow inexpensive large-scale changes of [[codons]] in genetic systems to improve [[gene expression]] or incorporate novel amino-acids (see [[George M. Church]]'s and Anthony Forster's synthetic cell projects<ref>{{Cite web|url=http://arep.med.harvard.edu/SBP|title=Synthetic Biology Projects|website=arep.med.harvard.edu|access-date=2018-02-17}}</ref><ref>{{cite journal | vauthors = Forster AC, Church GM | title = Towards synthesis of a minimal cell | journal = Molecular Systems Biology | volume = 2 | issue = 1 | pages = 45 | date = 2006-08-22 | pmid = 16924266 | pmc = 1681520 | doi = 10.1038/msb4100090 }}</ref>). This favors a synthesis-from-scratch approach.<br />
<br />
In 2007 it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip combined with PCR and DNA mismatch error-correction allow inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino-acids (see George M. Church's and Anthony Forster's synthetic cell projects). This favors a synthesis-from-scratch approach.<br />
<br />
2007年有报道称,几家公司可提供长达2000个碱基对 (bp) 的基因序列合成服务,价格约为每碱基对1美元,周转时间不到两周。从光刻或喷墨制造的 DNA 芯片中获取的寡核苷酸,结合 PCR 和 DNA 错配纠错技术,可以低成本地大规模改变遗传系统中的密码子,从而改善基因表达或引入新的氨基酸(参见乔治·M·丘奇 (George M. Church) 和安东尼·福斯特 (Anthony Forster) 的合成细胞项目)。这有利于从头合成的方法。<br />
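The idea of building long constructs from chip-synthesized oligonucleotides can be sketched simply: fragments that share overlapping ends are stitched into one sequence. The sketch below is a toy illustration only (the function name and the fixed overlap length are ours; real assembly, e.g. by overlap-extension PCR, works through hybridization and also corrects synthesis errors):

```python
# Toy overlap assembly: each oligo must begin with the last `overlap`
# bases of the growing construct, and contributes the remainder.
def assemble(oligos, overlap=6):
    construct = oligos[0]
    for oligo in oligos[1:]:
        assert construct[-overlap:] == oligo[:overlap], "oligos do not overlap"
        construct += oligo[overlap:]
    return construct

frags = ["ATGGCTAGCTAA", "AGCTAAGGTTAC", "GGTTACCCATGA"]
print(assemble(frags))  # ATGGCTAGCTAAGGTTACCCATGA
```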
<br />
<br />
<br />
Additionally, the [[CRISPR|CRISPR/Cas]] system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years".<ref name="washpost_crispr">{{cite news|last1=Basulto|first1=Dominic|title=Everything you need to know about why CRISPR is such a hot technology|url=https://www.washingtonpost.com/news/innovations/wp/2015/11/04/everything-you-need-to-know-about-why-crispr-is-such-a-hot-technology/|access-date=5 December 2015|work=Washington Post|date=November 4, 2015}}</ref> While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.<ref name="washpost_crispr" /> Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in [[Do-it-yourself biology|biohacking]].<ref>{{cite news|last1=Kahn|first1=Jennifer|title=The Crispr Quandary|url=https://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html?_r=0|access-date=5 December 2015|work=New York Times|date=November 9, 2015}}</ref><ref>{{cite journal|last1=Ledford|first1=Heidi|title=CRISPR, the disruptor|url=http://www.nature.com/news/crispr-the-disruptor-1.17673|access-date=5 December 2015|agency=Nature News|journal=Nature|date=June 3, 2015|pmid=26040877|doi=10.1038/522020a|volume=522|issue=7554|pages=20–4|bibcode=2015Natur.522...20L|doi-access=free}}</ref><ref>{{cite magazine|last1=Higginbotham|first1=Stacey|title=Top VC Says Gene Editing Is Riskier Than Artificial Intelligence|url=http://fortune.com/2015/12/04/khosla-crispr-ai/|access-date=5 December 2015|magazine=Fortune|date=4 December 2015}}</ref><br />
<br />
Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.<br />
<br />
此外,CRISPR/Cas 系统已经成为一种很有前途的基因编辑技术。它被称作“近30年来合成生物学领域最重要的创新”。虽然其他方法需要数月或数年来编辑基因序列,CRISPR 将这个时间缩短到数周。<br />
<br />
<br />
<br />
=== Sequencing 测序 ===<br />
<br />
[[DNA sequencing]] determines the order of [[nucleotide]] bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<ref>{{cite journal| author = Rollie| date = 2012 |title = Designing biological systems: Systems Engineering meets Synthetic Biology| journal = Chemical Engineering Science| volume = 69 | pages = 1–29| doi=10.1016/j.ces.2011.10.068| issue=1|display-authors=etal}}</ref><br />
<br />
DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<br />
<br />
DNA 测序确定 DNA 分子中核苷酸碱基的顺序。合成生物学家在工作中以多种方式使用 DNA 测序。首先,大规模基因组测序工作持续提供关于天然生物体的信息,这些信息为合成生物学家构建部件和装置提供了丰富的素材。其次,测序可以验证所构建的系统是否符合预期。第三,快速、廉价和可靠的测序有助于快速检测和识别合成系统与生物体。<br />
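The second use, verifying that a fabricated construct matches its design, amounts to comparing the observed sequence against the designed one. A toy comparison in Python (names are ours; real verification aligns sequencing reads and must handle indels and sequencing errors, not just substitutions):

```python
# Toy design-vs-observed check: positional comparison of two sequences,
# reporting each position where the observed base differs from the design.
def verify(design: str, observed: str):
    mismatches = [(i, d, o)
                  for i, (d, o) in enumerate(zip(design, observed)) if d != o]
    return len(design) == len(observed) and not mismatches, mismatches

ok, diffs = verify("ATGGCTAGC", "ATGGGTAGC")
print(ok, diffs)  # False [(4, 'C', 'G')]
```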
<br />
<br />
<br />
=== Microfluidics 微流控 ===
<br />
[[Microfluidics]], in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them.<ref>{{cite journal | vauthors = Elani Y | title = Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology | journal = Biochemical Society Transactions | volume = 44 | issue = 3 | pages = 723–30 | date = June 2016 | pmid = 27284034 | pmc = 4900754 | doi = 10.1042/BST20160052 }}</ref><ref>{{cite journal | vauthors = Gach PC, Iwai K, Kim PW, Hillson NJ, Singh AK | title = Droplet microfluidics for synthetic biology | journal = Lab on a Chip | volume = 17 | issue = 20 | pages = 3388–3400 | date = October 2017 | pmid = 28820204 | doi = 10.1039/C7LC00576H | osti = 1421856 | url = http://www.escholarship.org/uc/item/6cr3k0v5 }}</ref> It is widely employed in screening assays.<ref>{{cite journal | vauthors = Vinuselvi P, Park S, Kim M, Park JM, Kim T, Lee SK | title = Microfluidic technologies for synthetic biology | journal = International Journal of Molecular Sciences | volume = 12 | issue = 6 | pages = 3576–93 | date = 2011-06-03 | pmid = 21747695 | pmc = 3131579 | doi = 10.3390/ijms12063576 }}</ref><br />
<br />
Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them. It is widely employed in screening assays.<br />
<br />
微流体,特别是液滴微流体,是一种新兴的工具,用于构造新的元件,并分析和表征它们。它被广泛应用于筛选分析。<br />
<br />
<br />
<br />
=== Modularity 模块化 ===<br />
<br />
The most used<ref name="primer">{{Cite book|title=Synthetic Biology – A Primer|last1=Freemont|first1=Paul S.|last2=Kitney|first2=Richard I.| name-list-style = vanc |date=2012|publisher=World Scientific|isbn=978-1-84816-863-3|doi=10.1142/p837}}</ref>{{rp|22–23}} standardized DNA parts are [[BioBrick]] plasmids, invented by [[Tom Knight (scientist)|Tom Knight]] in 2003.<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> Biobricks are stored at the [[Registry of Standard Biological Parts]] in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the [[international Genetically Engineered Machine]] (iGEM) competition.<ref name="primer" />{{rp|22–23}}<br />
<br />
The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. Biobricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition. SH3 domain-peptide binding or SpyTag/SpyCatcher offer such control. In addition it is necessary to regulate protein-protein interactions in cells, such as with light (using light-oxygen-voltage-sensing domains) or cell-permeable small molecules by chemically induced dimerization.<br />
<br />
最常用的标准化 DNA 部件是生物积木 (BioBrick) 质粒,由汤姆·奈特于2003年发明。生物积木储存在马萨诸塞州剑桥的标准生物部件注册处。全世界成千上万的学生在国际基因工程机器竞赛 (iGEM) 中使用了生物积木标准。SH3结构域-多肽结合或 SpyTag/SpyCatcher 可以提供这样的控制。此外,还需要调控细胞内的蛋白质-蛋白质相互作用,例如利用光(使用光-氧-电压感应结构域),或通过化学诱导二聚化利用细胞渗透性小分子。<br />
<br />
<br />
<br />
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilient to harsh conditions). Interactions such as [[coiled coil]]s,<ref>{{cite journal | vauthors = Woolfson DN, Bartlett GJ, Bruning M, Thomson AR | title = New currency for old rope: from coiled-coil assemblies to α-helical barrels | journal = Current Opinion in Structural Biology | volume = 22 | issue = 4 | pages = 432–41 | date = August 2012 | pmid = 22445228 | doi = 10.1016/j.sbi.2012.03.002 }}</ref> [[SH3 domain]]-peptide binding<ref>{{cite journal | vauthors = Dueber JE, Wu GC, Malmirchegini GR, Moon TS, Petzold CJ, Ullal AV, Prather KL, Keasling JD | title = Synthetic protein scaffolds provide modular control over metabolic flux | journal = Nature Biotechnology | volume = 27 | issue = 8 | pages = 753–9 | date = August 2009 | pmid = 19648908 | doi = 10.1038/nbt.1557 | s2cid = 2756476 }}</ref> or [[SpyCatcher|SpyTag/SpyCatcher]]<ref>{{cite journal | vauthors = Reddington SC, Howarth M | title = Secrets of a covalent interaction for biomaterials and biotechnology: SpyTag and SpyCatcher | journal = Current Opinion in Chemical Biology | volume = 29 | pages = 94–9 | date = December 2015 | pmid = 26517567 | doi = 10.1016/j.cbpa.2015.10.002 | doi-access = free }}</ref> offer such control. 
In addition it is necessary to regulate protein-protein interactions in cells, such as with light (using [[light-oxygen-voltage-sensing domain]]s) or cell-permeable small molecules by [[chemically induced dimerization]].<ref>{{cite journal | vauthors = Bayle JH, Grimley JS, Stankunas K, Gestwicki JE, Wandless TJ, Crabtree GR | title = Rapamycin analogs with differential binding specificity permit orthogonal control of protein activity | journal = Chemistry & Biology | volume = 13 | issue = 1 | pages = 99–107 | date = January 2006 | pmid = 16426976 | doi = 10.1016/j.chembiol.2005.10.017 | doi-access = free }}</ref><br />
<br />
<br />
在一个活细胞中,分子模体被嵌入到一个更大的由上游和下游组件构成的网络中。这些组件可以改变该模块的信号传导能力。在超灵敏模块的情况下,模块的灵敏度贡献可能不同于该模块在孤立状态下所表现出的灵敏度。<br />
<br />
<br />
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the modeling module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<ref name="altszylerUltrasens2014">{{cite journal | vauthors = Altszyler E, Ventura A, Colman-Lerner A, Chernomoretz A | title = Impact of upstream and downstream constraints on a signaling module's ultrasensitivity | journal = Physical Biology | volume = 11 | issue = 6 | pages = 066003 | date = October 2014 | pmid = 25313165 | pmc = 4233326 | doi = 10.1088/1478-3975/11/6/066003 | bibcode = 2014PhBio..11f6003A }}</ref><ref name="altszylerUltrasens2017">{{cite journal | vauthors = Altszyler E, Ventura AC, Colman-Lerner A, Chernomoretz A | title = Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations | journal = PLOS ONE | volume = 12 | issue = 6 | pages = e0180083 | year = 2017 | pmid = 28662096 | pmc = 5491127 | doi = 10.1371/journal.pone.0180083 | bibcode = 2017PLoSO..1280083A | arxiv = 1608.08007 }}</ref><br />
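This context dependence can be sketched numerically. In the minimal Python sketch below, the Hill exponent, the Michaelis–Menten-style upstream stage, and all parameter values are illustrative assumptions, not values taken from the cited studies:

```python
def hill(x, K=1.0, n=4):
    """Hill activation curve; n > 1 makes the module ultrasensitive."""
    return x**n / (K**n + x**n)

def log_gain(f, x, dx=1e-6):
    """Local sensitivity d(ln f)/d(ln x), estimated by central difference."""
    return (f(x + dx) - f(x - dx)) / (2 * dx) * x / f(x)

# In isolation, the module's log-gain at half-maximal input (x = K) is n/2 = 2.
s_isolated = log_gain(hill, 1.0)

# Embedded: the input first passes through a saturating upstream stage,
# which compresses the signal the module actually sees.
upstream = lambda u: u / (1.0 + u)
cascade = lambda u: hill(upstream(u))
s_embedded = log_gain(cascade, 1.0)
```

With these toy numbers the embedded log-gain (about 1.88) falls below the isolated value of 2, illustrating how upstream saturation can mask a module's intrinsic ultrasensitivity.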
<br />
<br />
<br />
<br />
模型通过在制造之前更好地预测系统行为来指导工程生物系统的设计。合成生物学受益于更好的模型,这些模型描述生物分子如何结合底物和催化反应,DNA 如何编码指定细胞所需的信息,以及多组分集成系统如何运作。基因调控网络的多尺度模型关注于合成生物学的应用。模拟可以对基因调控网络的转录、翻译、调控和诱导过程中所有的生物分子相互作用进行建模。<br />
<br />
=== Modeling 建模 ===<br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in [[Transcription (biology)|transcription]], [[Translation (biology)|translation]], regulation and induction of gene regulatory networks.<ref>{{cite journal | vauthors = Carbonell-Ballestero M, Duran-Nebreda S, Montañez R, Solé R, Macía J, Rodríguez-Caso C | title = A bottom-up characterization of transfer functions for synthetic biology designs: lessons from enzymology | journal = Nucleic Acids Research | volume = 42 | issue = 22 | pages = 14060–14069 | date = December 2014 | pmid = 25404136 | pmc = 4267673 | doi = 10.1093/nar/gku964 }}</ref><ref>{{cite journal | vauthors = Kaznessis YN | title = Models for synthetic biology | journal = BMC Systems Biology | volume = 1 | issue = 1 | pages = 47 | date = November 2007 | pmid = 17986347 | pmc = 2194732 | doi = 10.1186/1752-0509-1-47 }}</ref><ref>{{cite conference |vauthors=Tuza ZA, Singhal V, Kim J, Murray RM | title = An in silico modeling toolbox for rapid prototyping of circuits in a biomolecular "breadboard" system. |book-title=52nd IEEE Conference on Decision and Control |date=December 2013 |doi=10.1109/CDC.2013.6760079}}</ref><br />
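As a deliberately simplified illustration of such simulations, the sketch below integrates a deterministic two-equation model of one inducible gene — transcription to mRNA, then translation to protein — with forward-Euler steps. All rate constants are invented for illustration and do not come from the cited work:

```python
def simulate(inducer, t_end=200.0, dt=0.01):
    """Steady-state protein level of one inducible gene (toy model)."""
    k_tx, k_tl = 2.0, 5.0   # max transcription rate, translation rate
    d_m, d_p = 0.5, 0.1     # mRNA and protein degradation rates
    K = 1.0                 # induction half-saturation constant
    m = p = 0.0             # mRNA and protein concentrations
    for _ in range(int(t_end / dt)):
        tx = k_tx * inducer / (K + inducer)  # induced transcription rate
        m += (tx - d_m * m) * dt
        p += (k_tl * m - d_p * p) * dt
    return p

low, high = simulate(0.1), simulate(10.0)  # weak vs strong induction
```

Analytically this model settles to p* = k_tl·k_tx·I / (d_m·d_p·(K + I)), so the protein output rises with inducer I and saturates toward k_tl·k_tx/(d_m·d_p) = 200 in these units, which the numerical integration reproduces.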
<br />
<br />
<br />
<br />
研究已经考虑了 DNA 转录机制的组成部分。科学家创造合成生物电路的一个愿望是能够控制单细胞生物(原核生物)和多细胞生物(真核生物)中合成 DNA 的转录。一项研究测试了合成转录因子(sTF)在转录输出和多个转录因子复合物之间协同能力方面的可调节性。研究人员能够突变被称为锌指(sTF 中特异性识别 DNA 的组件)的功能区域,以降低它们对特定操纵基因 DNA 序列位点的亲和力,从而降低 sTF 相关的位点特异性活性(通常是转录调控)。他们进一步使用锌指作为形成复合物的 sTF 的组成部分,后者对应真核翻译机制。研究人员还在活细胞中演示了模拟和数字计算,证明了可以设计细菌使其执行模拟和/或数字计算。2007年,研究人员展示了一种在哺乳动物细胞中运作的通用逻辑求值器。随后,研究人员在2011年利用这一范式展示了一种概念验证疗法,利用生物数字计算来检测和杀死人类癌细胞。另一组研究人员在2016年证明了计算机工程的原理可以用来自动化细菌细胞中的数字电路设计。2017年,研究人员演示了“通过 DNA 切除实现布尔逻辑和算术”(BLADE)系统,用于在人类细胞中构建数字计算。<br />
<br />
=== Synthetic transcription factors 合成转录因子 ===<br />
<br />
Studies have considered the components of the [[Transcription (biology)|DNA transcription]] mechanism. One desire of scientists creating [[synthetic biological circuit]]s is to be able to control the transcription of synthetic DNA in unicellular organisms ([[prokaryote]]s) and in multicellular organisms ([[eukaryote]]s). One study tested the adjustability of synthetic [[transcription factor]]s (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes.<ref name="Khalil AS 2012">{{cite journal | vauthors = Khalil AS, Lu TK, Bashor CJ, Ramirez CL, Pyenson NC, Joung JK, Collins JJ | title = A synthetic biology framework for programming eukaryotic transcription functions | journal = Cell | volume = 150 | issue = 3 | pages = 647–58 | date = August 2012 | pmid = 22863014 | pmc = 3653585 | doi = 10.1016/j.cell.2012.05.045 }}</ref> Researchers were able to mutate functional regions called [[zinc finger]]s, the DNA specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the [[eukaryotic translation]] mechanisms.<ref name="Khalil AS 2012"/><br />
<br />
<br />
<br />
<br />
生物传感器是指一种工程化的有机体(通常是细菌),它能够报告周边的某些环境现象,如重金属或毒素的存在。其中一个这样的系统是费氏弧菌(Aliivibrio fischeri)的 Lux 操纵子,它编码的酶是细菌生物发光的来源,可以放置在应答启动子之后,以响应特定的环境刺激来表达发光基因。已制成的一种此类传感器由光敏计算机芯片上的生物发光细菌涂层组成,用以检测某些石油污染物。当细菌感知到污染物时,它们就会发光。另一个类似机制的例子是地雷的检测:一种能够检测 TNT 及其主要降解产物 DNT 的大肠杆菌报告基因工程菌株在检测到目标后会产生绿色荧光蛋白(GFP)。<br />
<br />
== Applications 应用 ==<br />
<br />
=== Biological computers 生物计算机 ===<br />
<br />
<br />
改良有机体可以感知环境信号,并发送能够被检测到的输出信号,用于诊断目的。微生物群落已经被应用于这种用途。<br />
<br />
A [[biological computer]] refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of [[logic gate]]s in a number of organisms,<ref>{{cite journal | vauthors = Singh V | title = Recent advances and opportunities in synthetic logic gates engineering in living cells | journal = Systems and Synthetic Biology | volume = 8 | issue = 4 | pages = 271–82 | date = December 2014 | pmid = 26396651 | pmc = 4571725 | doi = 10.1007/s11693-014-9154-6 }}</ref> and demonstrated both analog and digital computation in living cells. Bacteria, for example, have been engineered to perform both analog and digital computation.<ref>{{cite journal | vauthors = Purcell O, Lu TK | title = Synthetic analog and digital circuits for cellular computation and memory | journal = Current Opinion in Biotechnology | volume = 29 | pages = 146–55 | date = October 2014 | pmid = 24794536 | pmc = 4237220 | doi = 10.1016/j.copbio.2014.04.009 | series = Cell and Pathway Engineering }}</ref><ref>{{cite journal | vauthors = Daniel R, Rubens JR, Sarpeshkar R, Lu TK | title = Synthetic analog computation in living cells | journal = Nature | volume = 497 | issue = 7451 | pages = 619–23 | date = May 2013 | pmid = 23676681 | doi = 10.1038/nature12148 | bibcode = 2013Natur.497..619D | s2cid = 4358570 }}</ref> In 2007, researchers demonstrated a universal logic evaluator that operates in mammalian cells.<ref>{{cite journal | vauthors = Rinaudo K, Bleris L, Maddamsetti R, Subramanian S, Weiss R, Benenson Y | title = A universal RNAi-based logic evaluator that operates in mammalian cells | journal = Nature Biotechnology | volume = 25 | issue = 7 | pages = 795–801 | date = July 2007 | pmid = 17515909 | doi = 10.1038/nbt1307 | s2cid = 280451 }}</ref> Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011.<ref>{{cite journal | vauthors = Xie Z, Wroblewska L, Prochazka L, Weiss R, Benenson Y | title = Multi-input RNAi-based logic circuit for identification of specific cancer cells | journal = Science | volume = 333 | issue = 6047 | pages = 1307–11 | date = September 2011 | pmid = 21885784 | doi = 10.1126/science.1205527 | bibcode = 2011Sci...333.1307X | s2cid = 13743291 | url = https://semanticscholar.org/paper/372e175668b5323d79950b58f12b36f6974a81ef }}</ref> Another group of researchers demonstrated in 2016 that principles of [[computer engineering]] can be used to automate digital circuit design in bacterial cells.<ref>{{cite journal | vauthors = Nielsen AA, Der BS, Shin J, Vaidyanathan P, Paralanov V, Strychalski EA, Ross D, Densmore D, Voigt CA | title = Genetic circuit design automation | journal = Science | volume = 352 | issue = 6281 | pages = aac7341 | date = April 2016 | pmid = 27034378 | doi = 10.1126/science.aac7341 | doi-access = free }}</ref> In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.<ref>{{cite journal | vauthors = Weinberg BH, Pham NT, Caraballo LD, Lozanoski T, Engel A, Bhatia S, Wong WW | title = Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells | journal = Nature Biotechnology | volume = 35 | issue = 5 | pages = 453–462 | date = May 2017 | pmid = 28346402 | pmc = 5423837 | doi = 10.1038/nbt.3805 }}</ref><br />
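The digital abstraction behind such genetic logic gates can be sketched in a few lines: a two-input transcriptional AND gate fires only when both activating inputs are present, and a Boolean readout is recovered by thresholding the analog promoter activity. The Hill parameters and threshold below are illustrative, not taken from any of the cited circuits:

```python
def activation(x, K=1.0, n=2):
    """Fractional promoter occupancy by one activator (Hill curve)."""
    return x**n / (K**n + x**n)

def genetic_and(a, b, threshold=0.25):
    """AND logic: the output promoter needs both activators bound;
    thresholding the analog activity yields a Boolean output."""
    return activation(a) * activation(b) > threshold

# Truth table with inducer concentrations 0 (absent) or 10 (saturating):
truth_table = {(a, b): genetic_and(10.0 * a, 10.0 * b)
               for a in (0, 1) for b in (0, 1)}
```

Only the (1, 1) input exceeds the threshold, reproducing AND behavior; real circuits must additionally manage leaky basal expression, which this sketch ignores.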
<br />
<br />
<br />
=== Biosensors 生物传感器 ===<br />
<br />
<br />
细胞使用相互作用的基因和蛋白质(即所谓的基因回路)来实现不同的功能,如响应环境信号、决策和通讯。其中涉及三个关键组成部分:DNA、RNA 和蛋白质。合成生物学家设计的基因回路可以在转录、转录后和翻译等多个水平上控制基因表达。<br />
<br />
A [[biosensor]] refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the [[Luciferase|Lux operon]] of ''[[Aliivibrio fischeri]]'',<ref>{{cite journal | vauthors = de Almeida PE, van Rappard JR, Wu JC | title = In vivo bioluminescence for tracking cell fate and function | journal = American Journal of Physiology. Heart and Circulatory Physiology | volume = 301 | issue = 3 | pages = H663–71 | date = September 2011 | pmid = 21666118 | pmc = 3191083 | doi = 10.1152/ajpheart.00337.2011 }}</ref> which codes for the enzyme that is the source of bacterial [[bioluminescence]], and can be placed after a respondent [[Promoter (genetics)|promoter]] to express the luminescence genes in response to a specific environmental stimulus.<ref>{{cite journal | vauthors = Close DM, Xu T, Sayler GS, Ripp S | title = In vivo bioluminescent imaging (BLI): noninvasive visualization and interrogation of biological processes in living animals | journal = Sensors | volume = 11 | issue = 1 | pages = 180–206 | date = 2011 | pmid = 22346573 | pmc = 3274065 | doi = 10.3390/s110100180 }}</ref> One such sensor consisted of a [[bioluminescent bacteria]]l coating on a photosensitive [[computer chip]] to detect certain [[petroleum]] [[pollutant]]s. When the bacteria sense the pollutant, they luminesce.<ref>{{cite journal|last=Gibbs|first=W. Wayt| name-list-style = vanc |date=1997 |title=Critters on a Chip |url=http://www.sciam.com/article.cfm?id=critters-on-a-chip |journal=Scientific American|access-date=2 Mar 2009}}</ref> Another example of a similar mechanism is the detection of landmines by an engineered ''E. coli'' reporter strain capable of detecting [[TNT]] and its main degradation product [[2,4-Dinitrotoluene|DNT]], and consequently producing a green fluorescent protein ([[Green fluorescent protein|GFP]]).<ref>{{Cite journal|last1=Belkin|first1=Shimshon|last2=Yagur-Kroll|first2=Sharon|last3=Kabessa|first3=Yossef|last4=Korouma|first4=Victor|last5=Septon|first5=Tali|last6=Anati|first6=Yonatan|last7=Zohar-Perez|first7=Cheinat|last8=Rabinovitz|first8=Zahi|last9=Nussinovitch|first9=Amos|date=April 2017|title=Remote detection of buried landmines using a bacterial sensor|journal=Nature Biotechnology|volume=35|issue=4|pages=308–310|doi=10.1038/nbt.3791|pmid=28398330|s2cid=3645230|issn=1087-0156}}</ref><br />
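The dose-response logic shared by these reporter systems can be sketched as a leaky inducible promoter plus a detection threshold. The basal expression, maximal output, and three-times-background detection rule below are all illustrative assumptions, not parameters from the cited sensors:

```python
def reporter_signal(analyte, basal=5.0, v_max=500.0, K=2.0):
    """Luminescence/fluorescence output: basal leak plus induced term."""
    return basal + v_max * analyte / (K + analyte)

def detected(analyte):
    """Call a detection when output exceeds 3x the uninduced background."""
    return reporter_signal(analyte) > 3 * reporter_signal(0.0)
```

With these toy numbers the detection limit falls near an analyte concentration of about 0.04 units; lowering the basal leak or raising v_max pushes the limit lower, which is the practical engineering trade-off in reporter design.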
<br />
<br />
<br />
<br />
传统的代谢工程学已经通过引入外源基因的组合和定向进化的优化得到了支持。这包括改造大肠杆菌和酵母菌,用于商业化生产抗疟药物青蒿素的前体。<br />
<br />
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.<ref name="pmid26019220">{{cite journal | vauthors = Danino T, Prindle A, Kwong GA, Skalak M, Li H, Allen K, Hasty J, Bhatia SN | title = Programmable probiotics for detection of cancer in urine | journal = Science Translational Medicine | volume = 7 | issue = 289 | pages = 289ra84 | date = May 2015 | pmid = 26019220 | pmc = 4511399 | doi = 10.1126/scitranslmed.aaa3519 }}</ref><br />
<br />
<br />
<br />
<br />
虽然活细胞可以通过新的 DNA 转化,但整个有机体还没有从头开始创造。有几种方法可以构建合成 DNA 组件,甚至是整个合成基因组,但是一旦获得了所需的遗传密码,它就会被整合到一个活细胞中,这个活细胞在生长和发育的过程中,有望表现出所需的新能力或表型。细胞转化被用于创造生物电路,我们可以通过操纵这些电路来产生所需的输出。<br />
<br />
=== Cell transformation 细胞转化 ===<br />
<br />
{{Main|Transformation (genetics)}}Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and protein. Synthetic biologists have designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.<br />
<br />
<br />
<br />
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering ''E. coli'' and [[yeast]] for commercial production of a precursor of the [[Antimalarial medication|antimalarial drug]], [[Artemisinin]].<ref>{{cite journal | vauthors = Westfall PJ, Pitera DJ, Lenihan JR, Eng D, Woolard FX, Regentin R, Horning T, Tsuruta H, Melis DJ, Owens A, Fickes S, Diola D, Benjamin KR, Keasling JD, Leavell MD, McPhee DJ, Renninger NS, Newman JD, Paddon CJ | title = Production of amorphadiene in yeast, and its conversion to dihydroartemisinic acid, precursor to the antimalarial agent artemisinin | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 3 | pages = E111–8 | date = January 2012 | pmid = 22247290 | pmc = 3271868 | doi = 10.1073/pnas.1110740109 | bibcode = 2012PNAS..109E.111W }}</ref><br />
<br />
<br />
Top7 蛋白是最早设计出的、具有自然界中从未见过的折叠方式的蛋白质之一。<br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be [[Transformation (genetics)|transformed]] with new DNA. Several ways allow constructing synthetic DNA components and even entire [[Artificial gene synthesis|synthetic genomes]], but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or [[phenotype]]s while growing and thriving.<ref>{{cite news|url=https://www.independent.co.uk/news/science/eureka-scientists-unveil-giant-leap-towards-synthetic-life-9219644.html|title=Eureka! Scientists unveil giant leap towards synthetic life|last=Connor|first=Steve|date=28 March 2014|work=The Independent|access-date=2015-08-06}}</ref> Cell transformation is used to create [[Synthetic biological circuit|biological circuits]], which can be manipulated to yield desired outputs.<ref name=":0" /><ref name=":1" /><br />
<br />
<br />
天然蛋白质可以被改造,例如通过定向进化,可以产生与现有蛋白质功能相当或更优的新蛋白质结构。一个研究小组构建了一种螺旋束,它能够以与血红蛋白相似的特性结合氧,但不结合一氧化碳。一个类似的蛋白质结构被构建出来以支持多种氧化还原酶活性,而另一个小组则构建了一种在结构和序列上全新的 ATP 酶。另一组产生了一类 G 蛋白偶联受体,这类受体可以被惰性小分子N-氧化氯氮平激活,但对天然配体乙酰胆碱不敏感;这些受体被称为 DREADDs。新的功能或蛋白质特异性也可以利用计算方法进行设计。一项研究使用了两种不同的计算方法——挖掘序列数据库的生物信息学和分子模拟方法,以及重新编程酶特异性的计算酶设计方法。这两种方法设计出的酶在以糖生产较长链醇方面都具有超过100倍的特异性。<br />
<br />
<br />
<br />
By integrating synthetic biology with [[materials science]], it would be possible to use cells as microscopic molecular foundries to produce materials whose properties are genetically encoded. Re-engineering has produced Curli fibers, the [[amyloid]] component of extracellular material of [[biofilms]], as a platform for programmable [[nanomaterial]]s. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.<ref>{{cite journal|vauthors=Nguyen PQ, Botyanszki Z, Tay PK, Joshi NS|date=September 2014|title=Programmable biofilm-based materials from engineered curli nanofibres|journal=Nature Communications|volume=5|pages=4945|bibcode=2014NatCo...5.4945N|doi=10.1038/ncomms5945|pmid=25229329|doi-access=free}}</ref><br />
<br />
<br />
另一个常见的研究是对天然的20种氨基酸集合的扩展。除终止密码子外,已鉴定出61个密码子,但所有生物体中通常只编码20种氨基酸。某些密码子被设计为编码替代氨基酸,包括:非标准氨基酸,如 O-甲基酪氨酸;或外源氨基酸,如4-氟苯丙氨酸。通常情况下,这些项目利用从其他生物体获取的重新编码的无义抑制 tRNA-氨酰 tRNA 合成酶对,不过在大多数情况下需要大量的改造工作。<br />
<br />
<br />
<br />
=== Designed proteins 设计蛋白质 ===<br />
<br />
<br />
其他研究人员通过缩减正常的20种氨基酸集合来研究蛋白质的结构和功能。通过生成其中一组氨基酸可由单一氨基酸替代的蛋白质,可以构建有限的蛋白质序列库。例如,一个蛋白质中的几种非极性氨基酸都可以被单一一种非极性氨基酸所取代。一个研究项目证明了,当只使用9种氨基酸时,一种改造过的分支酸变位酶仍然具有催化活性。<br />
<br />
<br />
<br />
[[File:Top7.png|thumb|The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature<ref name="kuhlman03">{{cite journal | vauthors = Kuhlman B, Dantas G, Ireton GC, Varani G, Stoddard BL, Baker D | title = Design of a novel globular protein fold with atomic-level accuracy | journal = Science | volume = 302 | issue = 5649 | pages = 1364–8 | date = November 2003 | pmid = 14631033 | doi = 10.1126/science.1089427 | bibcode = 2003Sci...302.1364K | s2cid = 1939390 | url = https://semanticscholar.org/paper/3188f905b60172dcad17a9b8c23567400c2bb65f }}</ref> ]]<br />
<br />
<br />
研究人员和公司运用合成生物学来合成具有高活性、最佳产量和有效性的工业酶。这些合成酶旨在改善洗涤剂和无乳糖乳制品等产品,并使它们更具成本效益。合成生物学对代谢工程的改进是生物技术用于工业上发现药物和发酵化学品的一个典例。合成生物学可以研究生化生产中的模块化途径系统,并提高代谢产物的产量。人工酶活性及其对代谢反应速率和产量的后续影响,可能开发出“改善细胞特性……用于工业上重要的生化生产的高效新策略”。<br />
<br />
<br />
<br />
Natural proteins can be engineered, for example by [[directed evolution]], to produce novel protein structures that match or improve on the functionality of existing proteins. One group generated a [[helix bundle]] that was capable of binding [[oxygen]] with similar properties as [[hemoglobin]], yet did not bind [[carbon monoxide]].<ref>{{cite journal | vauthors = Koder RL, Anderson JL, Solomon LA, Reddy KS, Moser CC, Dutton PL | title = Design and engineering of an O(2) transport protein | journal = Nature | volume = 458 | issue = 7236 | pages = 305–9 | date = March 2009 | pmid = 19295603 | pmc = 3539743 | doi = 10.1038/nature07841 | bibcode = 2009Natur.458..305K }}</ref> A similar protein structure was generated to support a variety of [[oxidoreductase]] activities<ref>{{cite journal | vauthors = Farid TA, Kodali G, Solomon LA, Lichtenstein BR, Sheehan MM, Fry BA, Bialas C, Ennist NM, Siedlecki JA, Zhao Z, Stetz MA, Valentine KG, Anderson JL, Wand AJ, Discher BM, Moser CC, Dutton PL | title = Elementary tetrahelical protein design for diverse oxidoreductase functions | journal = Nature Chemical Biology | volume = 9 | issue = 12 | pages = 826–833 | date = December 2013 | pmid = 24121554 | pmc = 4034760 | doi = 10.1038/nchembio.1362 }}</ref> while another formed a structurally and sequentially novel [[ATPase]].<ref name="WangHecht2020">{{cite journal|last1=Wang|first1=MS|last2=Hecht|first2=MH|title=A Completely De Novo ATPase from Combinatorial Protein Design|journal=Journal of the American Chemical Society|year=2020|volume=142|issue=36|pages=15230–15234|issn=0002-7863|doi=10.1021/jacs.0c02954|pmid=32833456}}</ref> Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule [[clozapine N-oxide]] but insensitive to the native [[ligand]], [[acetylcholine]]; these receptors are known as [[Receptor activated solely by a synthetic ligand|DREADDs]].<ref>{{cite journal | vauthors = Armbruster BN, Li X, Pausch MH, Herlitze S, Roth BL | title = Evolving the lock to fit the key to create a family of G protein-coupled receptors potently activated by an inert ligand | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 104 | issue = 12 | pages = 5163–8 | date = March 2007 | pmid = 17360345 | pmc = 1829280 | doi = 10.1073/pnas.0700293104 | bibcode = 2007PNAS..104.5163A }}</ref> Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for production of longer chain alcohols from sugar.<ref>{{cite journal | vauthors = Mak WS, Tran S, Marcheschi R, Bertolani S, Thompson J, Baker D, Liao JC, Siegel JB | title = Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway | journal = Nature Communications | volume = 6 | pages = 10005 | date = November 2015 | pmid = 26598135 | pmc = 4673503 | doi = 10.1038/ncomms10005 | bibcode = 2015NatCo...610005M }}</ref><br />
<br />
<br />
<br />
Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, Ribosome Binding Site Calculator, Cello, and Non-Repetitive Parts Calculator enable the design of new genetic systems.<br />
<br />
科学家可以将数字信息编码到一条合成 DNA 链上。2012年,乔治·M·丘奇(George M. Church)用 DNA 将他的一本关于合成生物学的书编码。这5.3 Mb 的数据量比之前存储在合成 DNA 中的最大信息量大了1000多倍。一个类似的项目将威廉·莎士比亚的十四行诗全部编码在 DNA 中。更广泛地说,NUPACK、ViennaRNA、Ribosome Binding Site Calculator、Cello 和 Non-Repetitive Parts Calculator 等算法使新遗传系统的设计成为可能。<br />
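The idea of DNA as a digital medium can be illustrated with a toy two-bits-per-base code (A=00, C=01, G=10, T=11). Note this is a simplification: Church's actual scheme used one bit per base to avoid hard-to-synthesize sequences, so the mapping below is purely illustrative:

```python
# Two bits per nucleotide (illustrative encoding, not Church's scheme).
B2N = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
N2B = {v: k for k, v in B2N.items()}

def encode(data: bytes) -> str:
    """Pack each byte into four nucleotides."""
    bits = ''.join(f'{byte:08b}' for byte in data)
    return ''.join(B2N[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """Recover the original bytes from a nucleotide string."""
    bits = ''.join(N2B[base] for base in seq)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))
```

At this density a 5.3 Mb text needs roughly 2.65 million bases; practical schemes add redundancy, addressing, and sequence constraints on top of the raw code.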
<br />
Another common investigation is [[Expanded genetic code|expansion]] of the natural set of 20 [[amino acid]]s. Excluding [[stop codon]]s, 61 [[codons]] have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl [[tyrosine]]; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded [[nonsense suppressor]] [[Transfer RNA|tRNA]]-[[Aminoacyl tRNA synthetase]] pairs from other organisms, though in most cases substantial engineering is required.<ref>{{cite journal | vauthors = Wang Q, Parrish AR, Wang L | title = Expanding the genetic code for biological studies | journal = Chemistry & Biology | volume = 16 | issue = 3 | pages = 323–36 | date = March 2009 | pmid = 19318213 | pmc = 2696486 | doi = 10.1016/j.chembiol.2009.03.001 }}</ref><br />
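Codon reassignment of this kind can be sketched with a toy translation table in which the amber stop codon UAG is re-coded to insert a nonstandard residue (written 'X' here as a stand-in for, e.g., O-methyl tyrosine). The miniature codon table covers only the codons used in the example:

```python
# Fragment of the standard table; '*' marks stop codons.
STANDARD = {'AUG': 'M', 'UUU': 'F', 'UAC': 'Y', 'UAG': '*', 'UAA': '*'}

def translate(mrna, table):
    """Read codons three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = table[mrna[i:i+3]]
        if aa == '*':
            break  # ribosome releases at a stop codon
        protein.append(aa)
    return ''.join(protein)

# Re-coded table: UAG no longer stops; a suppressor tRNA/synthetase
# pair inserts the new residue 'X' instead.
EXPANDED = dict(STANDARD, UAG='X')

msg = 'AUGUUUUAGUACUAA'
wild_type = translate(msg, STANDARD)  # truncates at UAG
recoded = translate(msg, EXPANDED)    # reads through with 'X'
```

The wild-type table truncates the peptide at UAG, while the expanded table reads through and terminates only at UAA, mirroring how amber-suppression systems extend the genetic code.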
<br />
<br />
<br />
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate mRNA or proteins able to use the artificial nucleotides.<br />
<br />
无论是在体外还是体内,将非天然核苷酸和氨基酸掺入核酸和蛋白质的许多技术已经被开发出来。例如,2014年5月,研究人员宣布他们已经成功地将两种新的人工核苷酸引入细菌 DNA。通过在培养基中加入单独的人工核苷酸,他们能够让细菌换代培养24次;细菌并没有产生能够利用这些人工核苷酸的 mRNA 或蛋白质。<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid.<ref>{{cite journal|author=Davidson, AR|author2=Lumb, KJ|author3=Sauer, RT|date=1995|title=Cooperatively folded proteins in random sequence libraries|journal=Nature Structural Biology|volume=2|issue=10|pages=856–864|doi=10.1038/nsb1095-856|pmid=7552709|s2cid=31781262}}</ref> For instance, several [[Chemical polarity|non-polar]] amino acids within a protein can all be replaced with a single non-polar amino acid.<ref>{{cite journal|vauthors=Kamtekar S, Schiffer JM, Xiong H, Babik JM, Hecht MH|date=December 1993|title=Protein design by binary patterning of polar and nonpolar amino acids|journal=Science|volume=262|issue=5140|pages=1680–5|bibcode=1993Sci...262.1680K|doi=10.1126/science.8259512|pmid=8259512}}</ref> One project demonstrated that an engineered version of [[Chorismate mutase]] still had catalytic activity when only 9 amino acids were used.<ref>{{cite journal|vauthors=Walter KU, Vamvaca K, Hilvert D|date=November 2005|title=An active enzyme constructed from a 9-amino acid alphabet|journal=The Journal of Biological Chemistry|volume=280|issue=45|pages=37742–6|doi=10.1074/jbc.M507210200|pmid=16144843|doi-access=free}}</ref><br />
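A reduced-alphabet library of this binary-patterned kind can be sketched by collapsing the 20 amino acids into one polar and one non-polar stand-in. The polar/non-polar partition below is a rough, illustrative classification, not the exact residue sets used in the cited experiments:

```python
NONPOLAR = set('AVLIMFWPG')   # rough hydrophobic class (illustrative)
POLAR = set('STYNQCKRHDE')    # remaining residues treated as polar

def reduce_alphabet(seq):
    """Map a protein sequence onto a two-letter (binary) alphabet:
    L for any non-polar residue, S for any polar residue."""
    return ''.join('L' if aa in NONPOLAR else 'S' for aa in seq)
```

For example, reduce_alphabet('MKVD') keeps only the polarity pattern 'LSLS' — the kind of information the binary-patterning experiments showed can be sufficient to specify a fold.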
<br />
<br />
<br />
Researchers and companies practice synthetic biology to synthesize [[industrial enzymes]] with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective.<ref>{{cite web|url=https://www.thermofisher.com/us/en/home/life-science/synthetic-biology/synthetic-biology-applications.html|title=Synthetic Biology Applications|website=www.thermofisher.com|access-date=2015-11-12}}</ref> The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<ref>{{cite journal | vauthors = Liu Y, Shin HD, Li J, Liu L | title = Toward metabolic engineering in the context of system biology and synthetic biology: advances and prospects | journal = Applied Microbiology and Biotechnology | volume = 99 | issue = 3 | pages = 1109–18 | date = February 2015 | pmid = 25547833 | doi = 10.1007/s00253-014-6298-y | s2cid = 954858 }}</ref><br />
<br />
<br />
<br />
<br />
=== Designed nucleic acid systems ===<br />
<br />
Scientists can encode digital information onto a single strand of [[synthetic DNA]]. In 2012, [[George M. Church]] encoded one of his books about synthetic biology in DNA. The 5.3 [[Megabit|Mb]] of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA.<ref>{{cite journal | vauthors = Church GM, Gao Y, Kosuri S | title = Next-generation digital information storage in DNA | journal = Science | volume = 337 | issue = 6102 | pages = 1628 | date = September 2012 | pmid = 22903519 | doi = 10.1126/science.1226355 | bibcode = 2012Sci...337.1628C | s2cid = 934617 | url = https://semanticscholar.org/paper/0856a685e85bcd27c11cd5f385be818deceb27bd }}</ref> A similar project encoded the complete [[sonnet]]s of [[William Shakespeare]] in DNA.<ref>{{cite web|url=http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|title=Huge amounts of data can be stored in DNA|date=23 January 2013|publisher=Sky News|access-date=24 January 2013|archive-url=https://web.archive.org/web/20160531044937/http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|archive-date=2016-05-31 }}</ref> More generally, algorithms such as NUPACK,<ref>{{Cite journal|last1=Zadeh|first1=Joseph N.|last2=Steenberg|first2=Conrad D.|last3=Bois|first3=Justin S.|last4=Wolfe|first4=Brian R.|last5=Pierce|first5=Marshall B.|last6=Khan|first6=Asif R.|last7=Dirks|first7=Robert M.|last8=Pierce|first8=Niles A.|date=2011-01-15|title=NUPACK: Analysis and design of nucleic acid systems|journal=Journal of Computational Chemistry|language=en|volume=32|issue=1|pages=170–173|doi=10.1002/jcc.21596|pmid=20645303}}</ref> ViennaRNA,<ref>{{Cite journal|last1=Lorenz|first1=Ronny|last2=Bernhart|first2=Stephan H.|last3=Höner zu Siederdissen|first3=Christian|last4=Tafer|first4=Hakim|last5=Flamm|first5=Christoph|last6=Stadler|first6=Peter F.|last7=Hofacker|first7=Ivo L.|date=2011-11-24|title=ViennaRNA Package 2.0|journal=Algorithms for 
Molecular Biology|language=en|volume=6|issue=1|pages=26|doi=10.1186/1748-7188-6-26|issn=1748-7188|pmc=3319429|pmid=22115189}}</ref> Ribosome Binding Site Calculator,<ref>{{Cite journal|last1=Salis|first1=Howard M.|last2=Mirsky|first2=Ethan A.|last3=Voigt|first3=Christopher A.|date=October 2009|title=Automated design of synthetic ribosome binding sites to control protein expression|journal=Nature Biotechnology|language=en|volume=27|issue=10|pages=946–950|doi=10.1038/nbt.1568|pmid=19801975|issn=1546-1696|pmc=2782888}}</ref> Cello,<ref>{{Cite journal|last1=Nielsen|first1=A. A. K.|last2=Der|first2=B. S.|last3=Shin|first3=J.|last4=Vaidyanathan|first4=P.|last5=Paralanov|first5=V.|last6=Strychalski|first6=E. A.|last7=Ross|first7=D.|last8=Densmore|first8=D.|last9=Voigt|first9=C. A.|date=2016-04-01|title=Genetic circuit design automation|journal=Science|language=en|volume=352|issue=6281|pages=aac7341|doi=10.1126/science.aac7341|pmid=27034378|issn=0036-8075|doi-access=free}}</ref> and Non-Repetitive Parts Calculator<ref>{{Cite journal|last1=Hossain|first1=Ayaan|last2=Lopez|first2=Eriberto|last3=Halper|first3=Sean M.|last4=Cetnar|first4=Daniel P.|last5=Reis|first5=Alexander C.|last6=Strickland|first6=Devin|last7=Klavins|first7=Eric|last8=Salis|first8=Howard M.|date=2020-07-13|title=Automated design of thousands of nonrepetitive parts for engineering stable genetic systems|url=https://www.nature.com/articles/s41587-020-0584-2|journal=Nature Biotechnology|language=en|pages=1–10|doi=10.1038/s41587-020-0584-2|pmid=32661437|s2cid=220506228|issn=1546-1696}}</ref> enable the design of new genetic systems.<br />
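The idea of encoding digital information onto synthetic DNA can be illustrated with a toy scheme that maps two bits to each nucleotide. Note this is a simplified sketch for illustration only; the 2012 Church et al. experiment used a different, redundancy-aware encoding with address blocks, not this direct 2-bits-per-base mapping.

```python
# Toy DNA data storage: 2 bits per nucleotide (A=00, C=01, G=10, T=11),
# so each byte becomes exactly 4 bases. Real schemes add addressing and
# error-correcting redundancy; this sketch shows only the core idea.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Convert a byte string into a DNA sequence (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Recover the original bytes from a DNA sequence produced by encode()."""
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"synthetic biology"
sequence = encode(message)
assert decode(sequence) == message
```

At this density a 5.3 Mb book needs on the order of a few million bases; practical systems trade some of that density for redundancy so the data survives synthesis and sequencing errors.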
<br />
<br />
<br />
<br />
Many technologies have been developed for incorporating [[Nucleic acid analogue|unnatural nucleotides]] and amino acids into nucleic acids and proteins, both ''in vitro'' and ''in vivo''. For example, in May 2014, researchers announced that they had successfully introduced two new artificial [[nucleotides]] into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate [[Messenger RNA|mRNA]] or proteins able to use the artificial nucleotides.<ref name="NYT-20140507">{{cite news|url=https://www.nytimes.com/2014/05/08/business/researchers-report-breakthrough-in-creating-artificial-genetic-code.html|title=Researchers Report Breakthrough in Creating Artificial Genetic Code|last=Pollack|first=Andrew|date=May 7, 2014|work=[[New York Times]]|access-date=May 7, 2014}}</ref><ref name="NATURE-20140507">{{cite journal|last=Callaway|first=Ewen|date=May 7, 2014|title=First life with 'alien' DNA|url=http://www.nature.com/news/first-life-with-alien-dna-1.15179|journal=[[Nature (journal)|Nature]]|doi=10.1038/nature.2014.15179|s2cid=86967999|access-date=May 7, 2014}}</ref><ref name="NATJ-20140507">{{cite journal|vauthors=Malyshev DA, Dhami K, Lavergne T, Chen T, Dai N, Foster JM, Corrêa IR, Romesberg FE|date=May 2014|title=A semi-synthetic organism with an expanded genetic alphabet|journal=Nature|volume=509|issue=7500|pages=385–8|bibcode=2014Natur.509..385M|doi=10.1038/nature13314|pmc=4058825|pmid=24805238}}</ref><br />
<br />
<br />
<br />
<br />
=== Space exploration ===<br />
<br />
<br />
Synthetic biology raised [[NASA|NASA's]] interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.<ref name="Verseux, C. 2015 73–100">{{Cite book|author=Verseux, C.|author2=Paulino-Lima, I.|author3=Baque, M.|author4=Billi, D.|author5=Rothschild, L.|date=2016|title=Synthetic Biology for Space Exploration: Promises and Societal Implications|journal=Ambivalences of Creating Life. Societal and Philosophical Dimensions of Synthetic Biology, Publisher: Springer-Verlag|volume=45|pages=73–100|doi=10.1007/978-3-319-21088-9_4|series=Ethics of Science and Technology Assessment|isbn=978-3-319-21087-2}}</ref><ref>{{cite journal|last1=Menezes|first1=A|last2=Cumbers|first2=J|last3=Hogan|first3=J|last4=Arkin|first4=A|date=2014|title=Towards synthetic biological approaches to resource utilization on space missions|journal=Journal of the Royal Society, Interface|volume=12|issue=102|pages=20140715|doi=10.1098/rsif.2014.0715|pmid=25376875|pmc=4277073}}</ref><ref>{{cite journal | vauthors = Montague M, McArthur GH, Cockell CS, Held J, Marshall W, Sherman LA, Wang N, Nicholson WL, Tarjan DR, Cumbers J | title = The role of synthetic biology for in situ resource utilization (ISRU) | journal = Astrobiology | volume = 12 | issue = 12 | pages = 1135–42 | date = December 2012 | pmid = 23140229 | doi = 10.1089/ast.2012.0829 | bibcode = 2012AsBio..12.1135M }}</ref> On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<ref name="Verseux, C. 
2015 73–100" /> Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.<ref>{{Cite web|title=NASA - Designer Plants on Mars|url=https://www.nasa.gov/centers/goddard/news/topstory/2005/mars_plants.html|last=GSFC|first=Bill Steigerwald |website=www.nasa.gov|language=en|access-date=2020-05-29}}</ref><br />
<br />
<br />
<br />
=== Synthetic life ===<br />
<br />
{{Further|Artificially Expanded Genetic Information System|Hypothetical types of biochemistry}}<br />
<br />
<br />
[[File:Syn3 genome.svg|thumb|upright=1.25|[[Gene]] functions in the minimal [[genome]] of the synthetic organism, ''[[Syn 3]]''.<ref name="Hutchison">{{cite journal | vauthors = Hutchison CA, Chuang RY, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, Gill J, Kannan K, Karas BJ, Ma L, Pelletier JF, Qi ZQ, Richter RA, Strychalski EA, Sun L, Suzuki Y, Tsvetanova B, Wise KS, Smith HO, Glass JI, Merryman C, Gibson DG, Venter JC | title = Design and synthesis of a minimal bacterial genome | journal = Science | volume = 351 | issue = 6280 | pages = aad6253 | date = March 2016 | pmid = 27013737 | doi = 10.1126/science.aad6253 | bibcode = 2016Sci...351.....H | doi-access = free }}</ref>]]<br />
<br />
One important topic in synthetic biology is ''synthetic life'', which is concerned with hypothetical organisms created ''[[in vitro]]'' from [[biomolecule]]s and/or [[hypothetical types of biochemistry|chemical analogues thereof]]. Synthetic life experiments attempt to either probe the [[origins of life]], study some of the properties of life, or more ambitiously to recreate life from non-living ([[abiotic components|abiotic]]) components. Research on synthetic life attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water.<ref name="enzymes2014">{{cite news |last=Connor |first=Steve |url=https://www.independent.co.uk/news/science/major-synthetic-life-breakthrough-as-scientists-make-the-first-artificial-enzymes-9896333.html |title=Major synthetic life breakthrough as scientists make the first artificial enzymes |work=The Independent |location=London |date=1 December 2014 |access-date=2015-08-06 }}</ref> In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.<ref name="enzymes2014" /><br />
<br />
<br />
<br />
<br />
A living "artificial cell" has been defined as a completely synthetic cell that can capture [[energy]], maintain [[electrochemical gradient|ion gradients]], contain [[macromolecules]] as well as store information and have the ability to [[mutate]].<ref name="Deamer">{{cite journal | vauthors = Deamer D | title = A giant step towards artificial life? | journal = Trends in Biotechnology | volume = 23 | issue = 7 | pages = 336–8 | date = July 2005 | pmid = 15935500 | doi = 10.1016/j.tibtech.2005.05.008 }}</ref> Nobody has been able to create such a cell.<ref name='Deamer'/><br />
<br />
<br />
<br />
<br />
A completely synthetic bacterial chromosome was produced in 2010 by [[Craig Venter]], and his team introduced it to genomically emptied bacterial host cells.<ref name="gibson52">{{cite journal | vauthors = Gibson DG, Glass JI, Lartigue C, Noskov VN, Chuang RY, Algire MA, Benders GA, Montague MG, Ma L, Moodie MM, Merryman C, Vashee S, Krishnakumar R, Assad-Garcia N, Andrews-Pfannkoch C, Denisova EA, Young L, Qi ZQ, Segall-Shapiro TH, Calvey CH, Parmar PP, Hutchison CA, Smith HO, Venter JC | title = Creation of a bacterial cell controlled by a chemically synthesized genome | journal = Science | volume = 329 | issue = 5987 | pages = 52–6 | date = July 2010 | pmid = 20488990 | doi = 10.1126/science.1190719 | bibcode = 2010Sci...329...52G | doi-access = free }}</ref> The host cells were able to grow and replicate.<ref>{{cite web| url=https://www.npr.org/templates/transcript/transcript.php?storyId=127010591| title=Scientists Reach Milestone On Way To Artificial Life| access-date=2010-06-09|date=2010-05-20}}</ref><ref>{{cite web|last1=Venter|first1=JC|title=From Designing Life to Prolonging Healthy Life|url=https://www.youtube.com/watch?v=Gwu_djYMm3w&t=30s|website=YouTube|publisher=University of California Television (UCTV)|access-date=1 February 2017}}</ref> The [[Mycoplasma laboratorium]] is the only living organism with completely engineered genome.<br />
<br />
<br />
<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used ''E. coli'' that had its genome extracted and replaced with a chromosome with an expanded genetic code. The [[nucleoside]]s added are [[d5SICS]] and [[dNaM]].<ref name="NATJ-20140507"/><br />
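An expanded genetic alphabet can be illustrated with a toy complement table. The letters X and Y below are placeholder symbols standing in for the unnatural d5SICS/dNaM pair; the point of the sketch is only that a third base pair extends the pairing rules, not the underlying chemistry.

```python
# Toy expanded genetic alphabet: the natural A-T and G-C pairs plus a third,
# unnatural pair, written here as the placeholders X-Y (standing in for the
# d5SICS-dNaM pair of the 2014 semi-synthetic organism).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G", "X": "Y", "Y": "X"}

def reverse_complement(strand: str) -> str:
    """Return the reverse complement of a strand over the 6-letter alphabet."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

# An X on one strand pairs with a Y on the other, just as A pairs with T:
assert reverse_complement("ATGXC") == "GYCAT"
```

With six letters instead of four, each position carries log2(6) ≈ 2.58 bits rather than 2, which is one motivation for expanded alphabets alongside encoding unnatural amino acids.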
<br />
<br />
<br />
<br />
In May 2019, researchers, in a milestone effort, reported the creation of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515"/><ref name="NAT-20190515"/><br />
<br />
<br />
<br />
Although several mechanisms can improve safety and control, limitations include the difficulty of inducing large DNA circuits into the cells and risks associated with introducing foreign components, especially proteins, into cells.<br />
<br />
<br />
In 2017 the international [[Build-a-Cell]] large-scale research collaboration for the construction of synthetic living cell was started,<ref>{{cite web|url=http://buildacell.io/|title=Build-a-Cell|accessdate=4 Dec 2019}}</ref> followed by national synthetic cell organizations in several countries, including FabriCell,<ref>{{cite web|url=http://fabricell.org/|title=FabriCell|accessdate=8 Dec 2019}}</ref> MaxSynBio<ref>{{cite web|url=https://www.maxsynbio.mpg.de/home/|title=MaxSynBio - Max Planck Research Network in Synthetic Biology|accessdate=8 Dec 2019}}</ref> and BaSyC.<ref>{{cite web|url=http://www.basyc.nl/|title=BaSyC|accessdate=8 Dec 2019}}</ref> The European synthetic cell efforts were unified in 2019 as SynCellEU initiative.<ref>{{cite web|url=http://www.syntheticcell.eu/|title=SynCell EU|accessdate=8 Dec 2019}}</ref><br />
<br />
<br />
<br />
=== Drug delivery platforms ===<br />
<br />
==== Engineered bacteria-based platform ====<br />
<br />
Bacteria have long been used in cancer treatment. ''[[Bifidobacterium]]'' and ''[[Clostridium]]'' selectively colonize tumors and reduce their size.<ref name="Zu_2014">{{cite journal|vauthors=Zu C, Wang J|date=August 2014|title=Tumor-colonizing bacteria: a potential tumor targeting therapy|url=|journal=Critical Reviews in Microbiology|volume=40|issue=3|pages=225–35|doi=10.3109/1040841X.2013.776511|pmid=23964706|s2cid=26498221}}</ref> Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, [[peptide]]s that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an [[affibody molecule]] that specifically targets human [[Epidermal growth factor receptor|epidermal growth factor receptor 2]]<ref name="Gujrati_2014">{{cite journal|vauthors=Gujrati V, Kim S, Kim SH, Min JJ, Choy HE, Kim SC, Jon S|date=February 2014|title=Bioengineered bacterial outer membrane vesicles as cell-specific drug-delivery vehicles for cancer therapy|url=|journal=ACS Nano|volume=8|issue=2|pages=1525–37|doi=10.1021/nn405724x|pmid=24410085}}</ref> and a synthetic [[Adhesin molecule (immunoglobulin -like)|adhesin]].<ref name="Piñero-Lambea_2015">{{cite journal|vauthors=Piñero-Lambea C, Bodelón G, Fernández-Periáñez R, Cuesta AM, Álvarez-Vallina L, Fernández LÁ|date=April 2015|title=Programming controlled adhesion of E. coli to target surfaces, cells, and tumors with synthetic adhesins|journal=ACS Synthetic Biology|volume=4|issue=4|pages=463–73|doi=10.1021/sb500252a|pmc=4410913|pmid=25045780}}</ref> The other way is to allow bacteria to sense the [[tumor microenvironment]], for example hypoxia, by building an AND logic gate into bacteria.<ref>{{cite journal | last1 = Deyneko | first1 = I.V. | last2 = Kasnitz | first2 = N. | last3 = Leschner | first3 = S. 
| last4 = Weiss | first4 = S. | year = 2016| title = Composing a tumor specific bacterial promoter | url = | journal = PLOS ONE | volume = 11| issue = 5| page = e0155338| doi = 10.1371/journal.pone.0155338 | pmid = 27171245 | pmc = 4865170 }}</ref> The bacteria then only release target therapeutic molecules to the tumor through either [[lysis]]<ref>{{cite journal | last1 = Rice | first1 = KC | last2 = Bayles | first2 = KW | year = 2008 | title = Molecular control of bacterial death and lysis | journal = Microbiol Mol Biol Rev | volume = 72 | issue = 1| pages = 85–109 | doi = 10.1128/mmbr.00030-07 | pmid = 18322035 | pmc = 2268280 }}</ref> or the [[bacterial secretion system]].<ref>{{cite journal | last1 = Ganai | first1 = S. | last2 = Arenas | first2 = R. B. | last3 = Forbes | first3 = N. S. | year = 2009 | title = Tumour-targeted delivery of TRAIL using Salmonella typhimurium enhances breast cancer survival in mice | url = | journal = Br. J. Cancer | volume = 101 | issue = 10| pages = 1683–1691 | doi = 10.1038/sj.bjc.6605403 | pmid = 19861961 | pmc = 2778534 }}</ref> Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
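The sensing-and-release behavior described above amounts to an AND gate: the payload is released only when the bacterium both binds a tumor-surface marker and detects the tumor microenvironment (e.g., hypoxia). The sketch below models that logic abstractly; the threshold values and signal names are illustrative assumptions, not parameters of any published circuit.

```python
# Abstract model of the tumor-sensing AND gate: therapeutic release requires
# BOTH a hypoxia signal AND adhesin binding to a tumor-surface marker.
# Thresholds are arbitrary illustrative values.
from dataclasses import dataclass

HYPOXIA_THRESHOLD = 0.6   # normalized hypoxia signal needed to fire
ADHESION_THRESHOLD = 0.5  # normalized adhesin-binding signal needed to fire

@dataclass
class TumorSensingBacterium:
    hypoxia_signal: float   # 0.0 (normoxic) .. 1.0 (strongly hypoxic)
    adhesion_signal: float  # 0.0 (unbound) .. 1.0 (bound to tumor marker)

    def releases_payload(self) -> bool:
        """AND gate: both inputs must exceed their thresholds."""
        return (self.hypoxia_signal >= HYPOXIA_THRESHOLD
                and self.adhesion_signal >= ADHESION_THRESHOLD)

# Well-oxygenated healthy tissue: gate stays off even if the cell adheres.
assert not TumorSensingBacterium(0.1, 0.9).releases_payload()
# Hypoxic tumor tissue with adhesin binding: gate fires.
assert TumorSensingBacterium(0.8, 0.7).releases_payload()
```

The AND composition is what minimizes off-target effects: either signal alone (hypoxia in a wound, or incidental adhesion in healthy tissue) is insufficient to trigger lysis or secretion.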
<br />
<br />
<br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are ''[[Salmonella enterica subsp. enterica|Salmonella typhimurium]]'', [[Escherichia coli|''Escherichia coli'']], ''Bifidobacteria'', ''[[Streptococcus]]'', ''[[Lactobacillus]]'', ''[[Listeria]]'' and ''[[Bacillus subtilis]]''. Each of these species has its own properties and offers unique advantages for cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<br />
<br />
<br />
==== Cell-based platform ====<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on [[Cancer immunotherapy|immunotherapies]], mostly by engineering [[T cell]]s.<br />
<br />
Ethical issues have surfaced for recombinant DNA and genetically modified organism (GMO) technologies, and extensive regulations of genetic engineering and pathogen research were already in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<br />
<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer [[epitope]]s. [[Chimeric antigen receptor]]s (CARs) are composed of a fragment of an [[antibody]] fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second generation CAR-based therapy was approved by FDA.{{Citation needed|date=April 2018}}<br />
<br />
<br />
<br />
Gene switches were designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects.<ref>Jones, B.S., Lamb, L.S., Goldman, F. & Di Stasi, A. Improving the safety of cell therapy products by suicide gene transfer. Front. Pharmacol. 5, 254 (2014).</ref> Mechanisms can more finely control the system and stop and reactivate it.<ref>{{cite journal | last1 = Wei | first1 = P | last2 = Wong | first2 = WW | last3 = Park | first3 = JS | last4 = Corcoran | first4 = EE | last5 = Peisajovich | first5 = SG | last6 = Onuffer | first6 = JJ | last7 = Weiss | first7 = A | last8 = Lim | first8 = WA | year = 2012 | title = Bacterial virulence proteins as tools to rewire kinase pathways in yeast and immune cells | url = | journal = Nature | volume = 488 | issue = 7411| pages = 384–388 | doi = 10.1038/nature11259 | pmid = 22820255 | pmc = 3422413 }}</ref><ref>{{cite journal | last1 = Danino | first1 = T. | last2 = Mondragon-Palomino | first2 = O. | last3 = Tsimring | first3 = L. | last4 = Hasty | first4 = J. | year = 2010 | title = A synchronized quorum of genetic clocks | url = | journal = Nature | volume = 463 | issue = 7279| pages = 326–330 | doi = 10.1038/nature08753 | pmid = 20090747 | pmc = 2838179 }}</ref> Since the number of T cells is important for therapy persistence and severity, the growth of T cells is also controlled to tune the effectiveness and safety of therapeutics.<ref>{{cite journal | last1 = Chen | first1 = Y. Y. | last2 = Jensen | first2 = M. C. | last3 = Smolke | first3 = C. D. | year = 2010 | title = Genetic control of mammalian T-cell proliferation with synthetic RNA regulatory systems | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 107 | issue = 19| pages = 8531–6 | doi = 10.1073/pnas.1001721107 | pmid = 20421500 | pmc = 2889348 }}</ref><br />
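The switch logic above can be sketched as a small state machine: a kill switch that terminates the therapy irreversibly, plus a separate pause/resume control for finer, reversible regulation. The state and inducer names below are illustrative assumptions for this sketch, not any specific clinical system.

```python
# State-machine sketch of engineered therapy safety switches: "kill" is
# irreversible (suicide-gene activation), while "pause"/"resume" model the
# finer reversible control described in the text.
from enum import Enum

class TherapyState(Enum):
    ACTIVE = "active"
    PAUSED = "paused"
    TERMINATED = "terminated"  # kill switch fired; cannot be undone

class EngineeredTCellTherapy:
    def __init__(self) -> None:
        self.state = TherapyState.ACTIVE

    def apply_inducer(self, inducer: str) -> None:
        """React to an external inducer (chemical, electromagnetic, light)."""
        if self.state is TherapyState.TERMINATED:
            return  # termination is permanent by design
        if inducer == "kill":      # e.g., severe side effects observed
            self.state = TherapyState.TERMINATED
        elif inducer == "pause":
            self.state = TherapyState.PAUSED
        elif inducer == "resume":
            self.state = TherapyState.ACTIVE

therapy = EngineeredTCellTherapy()
therapy.apply_inducer("pause")
therapy.apply_inducer("resume")
assert therapy.state is TherapyState.ACTIVE
therapy.apply_inducer("kill")
therapy.apply_inducer("resume")  # ignored once terminated
assert therapy.state is TherapyState.TERMINATED
```

Making termination a one-way transition mirrors the design intent of suicide-gene kill switches: safety must not depend on the continued presence of an inducer.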
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. As a safeguard based on auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
<br />
<br />
<br />
<br />
== Ethics ==<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions.<br /><br />
<br />
<br />
{{Update|section|date=January 2019}}<br />
<br />
<br />
<br />
The creation of new life and the tampering of existing life has raised [[Ethics|ethical concerns]] in the field of synthetic biology and are actively being discussed.<ref name=":3" /><br />
<br />
<br />
<br />
The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
欧盟资助的项目 SYNBIOSAFE 已经发布了关于如何管理合成生物学的报告。2007年的一篇论文确定了技术安全、生命安全、伦理和科学-社会接口方面的关键问题,并将其定义为公共教育和科学家、企业、政府和伦理学家之间的持续交流。SYNBIOSAFE 确定的关键生命安全问题涉及到销售合成 DNA 的公司和业余生物学家组成的生物黑客社区。关键的伦理问题涉及到创造新的生命形式。<br />
<br />
Common ethical questions include:<br />
常见的伦理问题包括:<br />
<br />
<br />
<br />
随后的一份报告聚焦于生物安保,特别是所谓的两用性挑战。例如,虽然合成生物学可能带来更高效的医疗产品生产,但它也可能被用于合成或改造有害的病原体(例如天花)。生物黑客社区仍然特别令人关切,因为开源生物技术分散和扩散的性质使得追踪、监管或减轻对生物安全和生物安保的潜在担忧变得困难。<br />
<br />
* Is it morally right to tamper with nature?<br />
篡改自然在道德上是正确的吗?<br />
<br />
* Is one playing God when creating new life?<br />
创造新生命是否是在扮演上帝?<br />
<br />
<br />
COSY 是欧洲的另一项倡议,主要关注公众认知和交流。为了更好地向更广泛的公众传播合成生物学及其社会影响,COSY 和 SYNBIOSAFE 于2009年10月发布了一部38分钟的纪录片《SYNBIOSAFE》。<br />
<br />
* What happens if a synthetic organism accidentally escapes?<br />
如果一种合成生命体意外地从实验室中泄露出去,会发生什么?<br />
<br />
* What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?<br />
假如有人滥用合成生物学并制造出有害的实体(例如生物武器),那该怎么办?<br />
<br />
<br />
国际合成生物学协会(International Association Synthetic Biology)提议进行行业自律,提出了合成生物学产业(特别是 DNA 合成公司)应当实施的具体措施。2007年,一个由主要 DNA 合成公司的科学家领导的小组发表了“为 DNA 合成产业建立有效监督框架的实用计划”。<br />
<br />
* Who will have control of and access to the products of synthetic biology? <br />
谁会拥有控制和访问合成生物产品的权限?<br />
<br />
* Who will gain from these innovations? Investors? Medical patients? Industrial farmers?<br />
谁会从这些创新中获利?投资者?患者?工业农民?<br />
<br />
<br />
2009年7月9日至10日,美国国家学院科学、技术和法律委员会召开了一次名为“合成生物学新兴领域的机遇与挑战”的研讨会。<br />
<br />
* Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?<ref>{{Cite web|url=https://www.theguardian.com/science/2018/nov/26/worlds-first-gene-edited-babies-created-in-china-claims-scientist|title= World's first gene-edited babies created in China, claims scientist |last=Staff|first=Agencies|date=November 2018|website=The Guardian|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
专利制度是否允许为生物体授予专利?生物体的一部分,例如人类的 HIV 抗性基因,又当如何?<br />
<br />
* What if a new creation is deserving of moral or legal status?<br />
如果一个新生命理应拥有道德和法律地位该怎么办?<br />
<br />
<br />
在第一个合成基因组发表以及随之而来的关于“创造生命”的媒体报道之后,巴拉克·奥巴马总统设立了生物伦理问题研究总统委员会来研究合成生物学。该委员会召开了一系列会议,并于2010年12月发布了一份题为《新方向:合成生物学和新兴技术的伦理学》的报告。委员会指出:“虽然文特尔的成就标志着一项重大的技术进步,证明了一个相对较大的基因组可以被准确地合成并替代另一个基因组,但它并不等于‘创造生命’。”报告指出,合成生物学是一个新兴领域,蕴含着潜在的风险和回报。该委员会没有建议改变政策或监督方式,并呼吁继续为研究提供资金,同时为监测、新出现的伦理问题研究和公共教育提供新的资金。这些安全问题可以通过政策立法规范生物技术的工业用途来避免。“生物伦理总统委员会……作为对宣布利用化学合成基因组创造出自我复制细胞的回应,提出了18项建议,不仅是为了规范科学……也是为了教育公众。”理查德·列万廷(Richard Lewontin)写道,《合成生物学监督原则》中讨论的一些监督安全原则是合理的,但宣言中建议的主要问题在于“广大公众缺乏强制推行对这些建议的任何有意义的落实的能力”。<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms.<ref>{{Cite journal|title=Synthetic Biology and Ethics: Past, Present, and Future|last=Hayry|first=Mattie|date=April 2017|journal=Cambridge Quarterly of Healthcare Ethics|volume=26|issue=2|pages=186–205|doi=10.1017/S0963180116000803|pmid=28361718}}</ref> Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<ref>{{Cite journal|title=Synthetic biology applied in the agrifood sector: Public perceptions, attitudes and implications for future studies|last=Jin |display-authors=etal |first=Shan|date=September 2019|journal=Trends in Food Science and Technology|volume=91|pages=454–466|doi=10.1016/j.tifs.2019.07.025}}</ref><ref name=":3">{{Cite journal|url=https://heinonline.org/HOL/LandingPage?handle=hein.journals/macq15&div=8&id=&page=| title=Synthetic Biology: Ethics, Exceptionalism and Expectations| pages=45| last=Newson|first=AJ|date=2015|journal=Macquarie Law Journal| volume=15|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
<br />
<br />
Ethical issues have surfaced for [[recombinant DNA]] and [[genetically modified organism]] (GMO) technologies and extensive regulations of [[genetic engineering]] and pathogen research were in place in many jurisdictions. [[Amy Gutmann]], former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<ref>{{cite journal | last = Gutmann | first = Amy | date = 2012 | title = The Ethics of Synthetic Biology | volume=41 | issue=4 | pages = 17–22 | journal = The Hastings Center Report | doi = 10.1002/j.1552-146X.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
<br />
<br />
<br />
合成生物学的危害包括对工作人员和公众的生物安全危害、蓄意改造生物体以造成危害所带来的生物安保危害,以及环境危害。生物安全危害与现有生物技术领域的类似,主要是接触病原体和有毒化学品,不过新型合成生物体可能带来新的风险。在生物安保方面,人们担心合成或重新设计的生物体在理论上可能被用于生物恐怖主义。潜在的风险包括从零开始重造已知病原体、将现有病原体改造得更加危险,以及改造微生物以生产有害的生物化学物质。最后,环境危害包括对生物多样性和生态系统服务的不利影响,包括农业使用合成生物体可能导致的土地利用变化。<br />
<br />
=== The "creation" of life 创造生命 ===<br />
<br />
<br />
<br />
<br />
通常认为,现有的转基因生物风险分析系统足以适用于合成生物体,尽管对于由单个基因序列“自下而上”构建的生物体可能存在困难。一般而言,合成生物学适用于现有的转基因生物和生物技术法规,以及针对下游商业产品的任何现行法规,尽管各法域通常都没有专门针对合成生物学的法规。<br />
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies.<ref name=":3" /> Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature’s "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how [[algal bloom]]s kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to [[nociception|sense pain]], [[sentience]], and self-perception. Should such life be given moral or legal rights? If so, how?<br />
<br />
<br />
<br />
=== Biosafety and biocontainment 生物安全与生物遏制 ===<br />
<br />
What is most ethically appropriate when considering biosafety measures? How can accidental introduction of synthetic life in the natural environment be avoided? Much ethical consideration and critical thought has been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics as there is yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild.<br />
<br />
<br />
<br />
In general, existing [[Hierarchy of hazard controls|hazard controls]], risk assessment methodologies, and regulations developed for traditional [[genetically modified organism]]s (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" [[biocontainment]] methods in a laboratory context include physical containment through [[biosafety cabinet]]s and [[glovebox]]es, as well as [[personal protective equipment]]. In an agricultural context they include isolation distances and [[pollen]] barriers, similar to methods for [[Biocontainment of genetically modified organisms|biocontainment of GMOs]]. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent [[horizontal gene transfer]] to natural organisms. Examples of intrinsic biocontainment include [[auxotrophy]], biological [[kill switch]]es, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of [[Xenobiology|xenobiological]] organisms using alternative biochemistry, for example using artificial [[xeno nucleic acid]]s (XNA) instead of DNA.<ref name=":12" /><ref name=":32">{{Cite journal|url=https://publications.europa.eu/en/publication-detail/-/publication/bfd7d06c-d3ae-11e5-a4b5-01aa75ed71a1/language-en|title=Opinion on synthetic biology II: Risk assessment methodologies and safety aspects|last=|first=|date=2016-02-12|website=EU [[Directorate-General for Health and Consumers]]|pages=|via=|doi=10.2772/63529|archive-url=|archive-date=|access-date=|volume=|publisher=Publications Office}}</ref> Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce [[histidine]], an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
<br />
<br />
<br />
<br />
=== Biosecurity 生物安全 ===<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies,<ref name="Bügl, H. et al. 2007 627–629">{{cite journal | vauthors = Bügl H, Danner JP, Molinari RJ, Mulligan JT, Park HO, Reichert B, Roth DA, Wagner R, Budowle B, Scripp RM, Smith JA, Steele SJ, Church G, Endy D | title = DNA synthesis and biological security | journal = Nature Biotechnology | volume = 25 | issue = 6 | pages = 627–9 | date = June 2007 | pmid = 17557094 | doi = 10.1038/nbt0607-627 | s2cid = 7776829 }}</ref><ref>{{cite web|url = http://www.synbioproject.org/site/assets/files/1335/hastings.pdf|title = Ethical Issues in Synthetic Biology: An Overview of the Debates|date = |access-date = |website = }}</ref> however, the issues are not seen as new because they were raised during the earlier [[recombinant DNA]] and [[genetically modified organism]] (GMO) debates and extensive regulations of [[genetic engineering]] and pathogen research are already in place in many jurisdictions.<ref name="bioethics.gov">Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/synthetic-biology-report NEW DIRECTIONS The Ethics of Synthetic Biology and Emerging Technologies] Retrieved 2012-04-14.</ref><br /><br />
<br />
<br />
<br />
=== European Union 欧盟===<br />
<br />
<br />
<br />
The [[European Union]]-funded project SYNBIOSAFE<ref>[http://www.synbiosafe.eu/ SYNBIOSAFE official site]</ref> has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.<ref name="Priorities">{{cite journal | vauthors = Schmidt M, Ganguli-Mitra A, Torgersen H, Kelle A, Deplazes A, Biller-Andorno N | title = A priority paper for the societal and ethical aspects of synthetic biology | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 3–7 | date = December 2009 | pmid = 19816794 | pmc = 2759426 | doi = 10.1007/s11693-009-9034-7 | url = http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf }}</ref><ref>Schmidt M. Kelle A. Ganguli A, de Vriend H. (Eds.) 2009. [https://www.springer.com/biomed/book/978-90-481-2677-4 "Synthetic Biology. The Technoscience and its Societal Consequences".] Springer Academic Publishing.</ref> The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the [[Do-it-yourself biology|biohacking]] community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called [[dual use technology|dual-use]] challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., [[smallpox]]).<ref>{{cite journal | vauthors = Kelle A | title = Ensuring the security of synthetic biology-towards a 5P governance strategy | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 85–90 | date = December 2009 | pmid = 19816803 | pmc = 2759433 | doi = 10.1007/s11693-009-9041-8 }}</ref> The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<ref>{{cite journal | vauthors = Schmidt M | title = Diffusion of synthetic biology: a challenge to biosafety | journal = Systems and Synthetic Biology | volume = 2 | issue = 1–2 | pages = 1–6 | date = June 2008 | pmid = 19003431 | pmc = 2671588 | doi = 10.1007/s11693-008-9018-z | url = http://www.markusschmidt.eu/pdf/Diffusion_of_synthetic_biology.pdf }}</ref><br />
<br />
<br />
<br />
COSY, another European initiative, focuses on public perception and communication.<ref>[http://www.synbio.at/ COSY: Communicating Synthetic Biology]</ref><ref>{{cite journal | vauthors = Kronberger N, Holtz P, Kerbe W, Strasser E, Wagner W | title = Communicating Synthetic Biology: from the lab via the media to the broader public | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 19–26 | date = December 2009 | pmid = 19816796 | pmc = 2759424 | doi = 10.1007/s11693-009-9031-x }}</ref><ref>{{cite journal | vauthors = Cserer A, Seiringer A | title = Pictures of Synthetic Biology : A reflective discussion of the representation of Synthetic Biology (SB) in the German-language media and by SB experts | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 27–35 | date = December 2009 | pmid = 19816797 | pmc = 2759430 | doi = 10.1007/s11693-009-9038-3 }}</ref> To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published ''SYNBIOSAFE'', a 38-minute documentary film, in October 2009.<ref>[http://www.synbiosafe.eu/DVD COSY/SYNBIOSAFE Documentary]</ref><br />
<br />
<br />
<br />
The International Association Synthetic Biology has proposed self-regulation.<ref>Report of IASB [http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf "Technical solutions for biosecurity in synthetic biology"] {{webarchive |url=https://web.archive.org/web/20110719031805/http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf |date=July 19, 2011 }}, Munich, 2008</ref> This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<ref name="Bügl, H. et al. 2007 627–629" /><br />
<br />
<br />
<br />
=== United States 美国 ===<br />
<br />
<br />
<br />
In January 2009, the [[Alfred P. Sloan Foundation]] funded the [[Woodrow Wilson Center]], the [[Hastings Center]], and the [[J. Craig Venter Institute]] to examine the public perception, ethics and policy implications of synthetic biology.<ref>Parens E., Johnston J., Moses J. [http://www.thehastingscenter.org/who-we-are/our-research/selected-past-projects/ethical-issues-in-synthetic-biology-2/ Ethical Issues in Synthetic Biology.] 2009.</ref><br />
<br />
<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<ref>[http://sites.nationalacademies.org/PGA/stl/PGA_050738 NAS Symposium official site]</ref><br />
<br />
<br />
<br />
After the publication of the [[Mycoplasma laboratorium|first synthetic genome]] and the accompanying media coverage about "life" being created, President [[Barack Obama]] established the [[Presidential Commission for the Study of Bioethical Issues]] to study synthetic biology.<ref>Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/node/353 FAQ]</ref> The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the “creation of life”.<ref>[http://bioethics.gov/node/353 Synthetic Biology F.A.Q.'s | Presidential Commission for the Study of Bioethical Issues]</ref> It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education.<ref name="bioethics.gov" /><br />
<br />
<br />
<br />
Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact".<ref name=":2">{{cite journal | vauthors = Erickson B, Singh R, Winters P | title = Synthetic biology: regulating industry uses of new biotechnologies | journal = Science | volume = 333 | issue = 6047 | pages = 1254–6 | date = September 2011 | pmid = 21885775 | doi = 10.1126/science.1211066 | bibcode = 2011Sci...333.1254E | s2cid = 1568198 | url = https://semanticscholar.org/paper/6ae989f6b07dc3c8a8694792d6fe8f036a0e0292 }}</ref> These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".<ref name=":2" /><br />
<br />
<br />
<br />
=== Opposition 反对意见 ===<br />
<br />
On March 13, 2012, over 100 environmental and civil society groups, including [[Friends of the Earth]], the [[International Center for Technology Assessment]] and the [[ETC Group (AGETC)|ETC Group]] issued the manifesto ''The Principles for the Oversight of Synthetic Biology''. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the [[human genome]] or [[human microbiome]].<ref>Katherine Xue for Harvard Magazine. September–October 2014 [http://harvardmagazine.com/2014/09/synthetic-biologys-new-menagerie Synthetic Biology’s New Menagerie]</ref><ref>Yojana Sharma for Scidev.net March 15, 2012. [http://www.scidev.net/global/genomics/news/ngos-call-for-international-regulation-of-synthetic-biology.html NGOs call for international regulation of synthetic biology]</ref> [[Richard Lewontin]] wrote that some of the safety tenets for oversight discussed in ''The Principles for the Oversight of Synthetic Biology'' are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<ref>[http://www.nybooks.com/articles/archives/2014/may/08/new-synthetic-biology-who-gains/?insrc=rel#fnr-1 The New Synthetic Biology: Who Gains?] (2014-05-08), [[Richard C. Lewontin]], ''[[New York Review of Books]]''</ref><br />
<br />
<br />
<br />
== Health and safety 健康和安全 ==<br />
<br />
{{Main|Hazards of synthetic biology}}<br />
<br />
<br />
<br />
The hazards of synthetic biology include [[biosafety]] hazards to workers and the public, [[biosecurity]] hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks.<ref name=":02">{{Cite journal|url=https://blogs.cdc.gov/niosh-science-blog/2017/01/24/synthetic-biology/|title=Synthetic Biology and Occupational Risk|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|date=2017-01-24|journal=Journal of Occupational and Environmental Hygiene|archive-url=|archive-date=|access-date=2018-11-30|last3=Schulte|first3=Paul|volume=14|issue=3|pages=224–236|pmid=27754800|doi=10.1080/15459624.2016.1237031|s2cid=205893358}}</ref><ref name=":12">{{Cite journal|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|last3=Schulte|first3=Paul|date=2016-10-18|title=Synthetic biology and occupational risk|journal=Journal of Occupational and Environmental Hygiene|volume=14|issue=3|pages=224–236|doi=10.1080/15459624.2016.1237031|pmid=27754800|s2cid=205893358|issn=1545-9624}}</ref> For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for [[bioterrorism]]. 
Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.<ref name=":7">{{Cite book|title=Biodefense in the Age of Synthetic Biology|date=2018-06-19|publisher=[[National Academies of Sciences, Engineering, and Medicine]]|isbn=9780309465182|location=|pages=|doi=10.17226/24890|pmid=30629396|last1=National Academies Of Sciences|first1=Engineering|author2=Division on Earth Life Studies|last3=Board On Life|first3=Sciences|author4=Board on Chemical Sciences Technology|author5=Committee on Strategies for Identifying Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology}}</ref> Lastly, environmental hazards include adverse effects on [[biodiversity]] and [[ecosystem services]], including potential changes to land use resulting from agricultural use of synthetic organisms.<ref name=":8">{{Cite web|url=http://ec.europa.eu/environment/integration/research/newsalert/multimedia/synthetic_biology_and_biodiversity.htm|title=Future Brief: Synthetic biology and biodiversity|last=|first=|date=September 2016|website=European Commission|pages=14–15|archive-url=|archive-date=|access-date=2019-01-14}}</ref><ref>{{Cite web|url=https://publications.europa.eu/en/publication-detail/-/publication/9b231c71-faf1-11e5-b713-01aa75ed71a1/language-en/format-PDF|title=Final opinion on synthetic biology III: Risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology|last=|first=|date=2016-04-04|website=EU Directorate-General for Health and Food Safety|pages=8, 27|archive-url=|archive-date=|access-date=2019-01-14}}</ref><br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.<ref name=":32" /><ref name=":22">{{Cite web|url=http://www.hse.gov.uk/research/rrpdf/rr944.pdf|title=Synthetic biology: A review of the technology, and current and future needs from the regulatory framework in Great Britain|last1=Bailey|first1=Claire|last2=Metcalf|first2=Heather|date=2012|website=UK [[Health and Safety Executive]]|archive-url=|archive-date=|access-date=2018-11-29|last3=Crook|first3=Brian}}</ref> Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<ref name=":5">{{Citation|last1=Pei|first1=Lei|title=Regulatory Frameworks for Synthetic Biology|date=2012|work=Synthetic Biology|pages=157–226|publisher=John Wiley & Sons, Ltd|doi=10.1002/9783527659296.ch5|isbn=9783527659296|last2=Bar‐Yam|first2=Shlomiya|last3=Byers‐Corbin|first3=Jennifer|last4=Casagrande|first4=Rocco|last5=Eichler|first5=Florentine|last6=Lin|first6=Allen|last7=Österreicher|first7=Martin|last8=Regardh|first8=Pernilla C.|last9=Turlington|first9=Ralph D.}}</ref><ref name=":4">{{Cite journal|last=Trump|first=Benjamin D.|date=2017-11-01|title=Synthetic biology regulation and governance: Lessons from TAPIC for the United States, European Union, and Singapore|journal=Health Policy|volume=121|issue=11|pages=1139–1146|doi=10.1016/j.healthpol.2017.07.010|pmid=28807332|issn=0168-8510|doi-access=free}}</ref><br />
<br />
<br />
<br />
== See also 请参阅 ==<br />
<br />
{{Colbegin|colwidth=20em}}<br />
<br />
* ''[[ACS Synthetic Biology]]'' (journal)<br />
<br />
* [[Bioengineering]]<br />
<br />
* [[Biomimicry]]<br />
<br />
<br />
* [[Carlson Curve]]<br />
<br />
<br />
* [[Chiral life concept]]<br />
<br />
<br />
* [[Computational biology]]<br />
<br />
<br />
* [[Computational biomodeling]]<br />
<br />
<br />
* [[DNA digital data storage]]<br />
<br />
<br />
* [[Engineering biology]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biology]]. Its edit history can be viewed at [[合成生物学/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>

粲兰 https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E5%AD%A6&diff=18646 合成生物学 2020-11-18T03:36:28Z
<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共4491,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{redirect|Artificial life form|simulated life forms|Artificial life}}<br />
<br />
{{short description|Interdisciplinary branch of biology and engineering}}<br />
<br />
{{Synthetic biology}}<br />
<br />
[[File:Synthetic Biology Research at NASA Ames.jpg|thumb|Synthetic Biology Research at [[Ames Research Center|NASA Ames Research Center]].]]<br />
<br />
美国宇航局埃姆斯研究中心(NASA Ames Research Center)的合成生物学研究。<br />
<br />
<br />
<br />
'''Synthetic biology''' ('''SynBio''') is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
Synthetic biology (SynBio) is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
合成生物学(SynBio)是一个多学科的研究领域,旨在创造新的生物部件、设备和系统,或重新设计已经在自然界中发现的系统。<br />
<br />
<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as [[biotechnology]], [[genetic engineering]], [[molecular biology]], [[molecular engineering]], [[systems biology]], [[Model lipid bilayer|membrane science]], [[biophysics]], [[Biological engineering|chemical and biological engineering]], [[Electrical engineering|electrical and computer engineering]], [[control engineering]] and [[evolutionary biology]].<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biotechnology, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology.<br />
<br />
它是科学的一个分支,涵盖来自不同学科的广泛方法,如生物技术、基因工程、分子生物学、分子工程、系统生物学、膜科学、生物物理学、化学与生物工程、电子与计算机工程、控制工程和进化生物学。<br />
<br />
<br />
<br />
Due to more powerful [[genetic engineering]] capabilities and decreased DNA synthesis and [[DNA sequencing|sequencing costs]], the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<ref>{{cite journal | last1 = Bueso | first1 = F. Y. | last2 = Tangney | first2 = M. | year = 2017 | title = Synthetic Biology in the Driving Seat of the Bioeconomy | url = | journal = Trends in Biotechnology | volume = 35 | issue = 5| pages = 373–378 | doi = 10.1016/j.tibtech.2017.02.002 | pmid = 28249675 }}</ref><br />
<br />
Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<br />
<br />
由于基因工程能力的增强以及 DNA 合成和测序成本的下降,合成生物学领域正在迅速发展。2016年,来自40个国家的350多家公司积极参与合成生物学应用; 所有这些公司在全球市场的净值估计为39亿美元。<br />
<br />
<br />
<br />
== Definition 定义 ==<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
合成生物学目前还没有公认的定义。以下是一些定义的示例:<br />
<br />
<br />
<br />
* "the use of a mixture of physical engineering and genetic engineering to create new (and, therefore, synthetic) life forms混合使用物理工程和基因工程来创建新的(因而也即合成的)生命形式。"<ref>{{cite journal | last1 = Hunter | first1 = D | year = 2013 | title = How to object to radically new technologies on the basis of justice: the case of synthetic biology | url = | journal = Bioethics | volume = 27 | issue = 8| pages = 426–434 | doi = 10.1111/bioe.12049 | pmid = 24010854 }}</ref><br />
<br />
<br />
* "an emerging field of research that aims to combine the knowledge and methods of biology, engineering and related disciplines in the design of chemically synthesized DNA to create organisms with novel or enhanced characteristics and traits一个新兴的研究领域,旨在将生物学,工程学和相关学科领域的知识和方法结合到化学合成DNA 的设计中,从而创造出具有新颖或增强特性和特征的有机体。<br />
"<ref>{{cite journal | last1 = Gutmann | first1 = A | year = 2011 | title = The ethics of synthetic biology: guiding principles for emerging technologies | url = | journal = Hastings Center Report | volume = 41 | issue = 4| pages = 17–22 | doi = 10.1002/j.1552-146x.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
* "designing and constructing [[BioBrick|biological modules]], [[biological systems]], and [[biological machine]]s or, re-design of existing biological systems for useful purposes设计并构建生物积木、生物系统以及生物机器,或为有用的目的重新设计现有的生物系统。"<ref name="NakanoEckford2013">{{cite book|url={{google books |plainurl=y |id=uVhsAAAAQBAJ}}|title=Molecular Communication|last1=Nakano|first1=Tadashi|last2=Eckford|first2=Andrew W.|last3=Haraguchi|first3=Tokuko|date=12 September 2013|publisher=Cambridge University Press|isbn=978-1-107-02308-6|name-list-style=vanc}}</ref><br />
<br />
<br />
* “applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature” (The European Commission, 2005). This can include the possibility of a [[molecular assembler]], based upon biomolecular systems such as the [[ribosome]].<ref name="RoadMap">{{Cite web|url=http://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf|title=Productive Nanosystems: A Technology Roadmap|website=Foresight Institute}}</ref><br />
“将系统设计的工程范式应用于生物系统,以产生具有自然界中不存在的新功能的、可预测且稳健的系统”(欧洲委员会,2005年)。这可能包括基于生物分子系统(例如核糖体)的分子组装器的可能性。<br />
<br />
<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
值得注意的是,合成生物学在传统上被分为两种不同的方法: 自上而下和自下而上。<br />
<br />
<br />
<br />
# The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
自上而下的方法包括利用代谢和基因工程技术赋予活细胞以新的功能。<br />
<br />
# The <u>bottom up</u> approach involves creating new biological systems ''in vitro'' by bringing together 'non-living' biomolecular components,<ref>{{cite journal | vauthors = Schwille P | title = Bottom-up synthetic biology: engineering in a tinkerer's world | journal = Science | volume = 333 | issue = 6047 | pages = 1252–4 | date = September 2011 | pmid = 21885774 | doi = 10.1126/science.1211701 | bibcode = 2011Sci...333.1252S | s2cid = 43354332 }}</ref> often with the aim of constructing an [[artificial cell]].<br />
<br />
The <u>bottom up</u> approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.<br />
<br />
自下而上的方法包括在体外创建新的生物系统,将“非活性”的生物分子组件聚集在一起,其目的通常是构建一个人工细胞。<br />
<br />
<br />
<br />
Biological systems are thus assembled module-by-module. [[Cell-free protein synthesis|Cell-free protein expression systems]] are often employed,<ref>{{cite journal | vauthors = Noireaux V, Libchaber A | title = A vesicle bioreactor as a step toward an artificial cell assembly | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 101 | issue = 51 | pages = 17669–74 | date = December 2004 | pmid = 15591347 | pmc = 539773 | doi = 10.1073/pnas.0408236101 | bibcode = 2004PNAS..10117669N }}</ref><ref>{{cite journal | vauthors = Hodgman CE, Jewett MC | title = Cell-free synthetic biology: thinking outside the cell | journal = Metabolic Engineering | volume = 14 | issue = 3 | pages = 261–9 | date = May 2012 | pmid = 21946161 | pmc = 3322310 | doi = 10.1016/j.ymben.2011.09.002 }}</ref><ref>{{cite journal | vauthors = Elani Y, Law RV, Ces O | title = Protein synthesis in artificial cells: using compartmentalisation for spatial organisation in vesicle bioreactors | journal = Physical Chemistry Chemical Physics | volume = 17 | issue = 24 | pages = 15534–7 | date = June 2015 | pmid = 25932977 | doi = 10.1039/C4CP05933F | bibcode = 2015PCCP...1715534E | doi-access = free }}</ref> as are membrane-based molecular machinery. 
There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells,<ref>{{cite journal | vauthors = Elani Y, Trantidou T, Wylie D, Dekker L, Polizzi K, Law RV, Ces O | title = Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules | journal = Scientific Reports | volume = 8 | issue = 1 | pages = 4564 | date = March 2018 | pmid = 29540757 | pmc = 5852042 | doi = 10.1038/s41598-018-22263-3 | bibcode = 2018NatSR...8.4564E }}</ref> and engineering communication between living and synthetic cell populations.<ref>{{cite journal | vauthors = Lentini R, Martín NY, Forlin M, Belmonte L, Fontana J, Cornella M, Martini L, Tamburini S, Bentley WE, Jousson O, Mansy SS | title = Two-Way Chemical Communication between Artificial and Natural Cells | journal = ACS Central Science | volume = 3 | issue = 2 | pages = 117–123 | date = February 2017 | pmid = 28280778 | pmc = 5324081 | doi = 10.1021/acscentsci.6b00330 }}</ref><br />
<br />
Biological systems are thus assembled module-by-module. Cell-free protein expression systems are often employed, as are membrane-based molecular machinery. There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells, and engineering communication between living and synthetic cell populations.<br />
<br />
生物系统就是这样一个模块一个模块地组装起来的。无细胞蛋白表达系统和基于膜的分子机器都经常被采用。通过构建活细胞与合成细胞的杂合体,以及在活细胞群与合成细胞群之间建立工程化通讯,弥合这两种方法之间鸿沟的努力正日益增多。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
'''1910:''' First identifiable use of the term "synthetic biology" in [[Stéphane Leduc]]'s publication ''Théorie physico-chimique de la vie et générations spontanées''.<ref>[https://openlibrary.org/books/OL23348076M/Théorie_physico-chimique_de_la_vie_et_générations_spontanées Théorie physico-chimique de la vie et générations spontanées, S. Leduc, 1910]</ref> He also noted this term in another publication, ''La Biologie Synthétique'' in 1912.<ref>{{cite book |url=http://www.peiresc.org/bstitre.htm |title=La biologie synthétique, étude de biophysique |last=Leduc |first=Stéphane |date=1912 | veditors = Poinat A }}</ref><br />
<br />
1910: First identifiable use of the term "synthetic biology" in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also noted this term in another publication, La Biologie Synthétique in 1912.<br />
<br />
1910年: 斯特凡纳·勒杜克 (Stéphane Leduc) 在其著作《Théorie physico-chimique de la vie et générations spontanées》中首次可考地使用了“合成生物学”一词。他还在1912年的另一本出版物《La Biologie Synthétique》中提到了这个术语。<br />
<br />
<br />
<br />
'''1961:''' Jacob and Monod postulate cellular regulation by molecular networks from their study of the ''lac'' operon in ''E. coli'' and envisioned the ability to assemble new systems from molecular components.<ref>Jacob, F.ß. & Monod, J. On the regulation of gene activity. Cold Spring Harb. Symp. Quant. Biol. 26, 193–211 (1961).</ref><br />
<br />
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components.<br />
<br />
1961年: 雅各布 (Jacob) 和莫诺 (Monod) 根据他们对大肠杆菌乳糖操纵子的研究,提出了分子网络调控细胞的假说,并设想了由分子组件组装新系统的能力。<br />
<br />
<br />
<br />
'''1973:''' First molecular cloning and amplification of DNA in a plasmid is published in ''P.N.A.S.'' by Cohen, Boyer ''et al.'' constituting the dawn of synthetic biology.<ref>{{cite journal | vauthors = Cohen SN, Chang AC, Boyer HW, Helling RB | title = Construction of biologically functional bacterial plasmids in vitro | journal = Proc. Natl. Acad. Sci. USA | volume = 70 | issue = 11 | pages = 3240–3244 | date = 1973 | pmid = 4594039 | doi = 10.1073/pnas.70.11.3240 | bibcode = 1973PNAS...70.3240C | pmc = 427208 }}</ref><br />
<br />
1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al. constituting the dawn of synthetic biology.<br />
<br />
1973年: 科恩 (Cohen)、博耶 (Boyer) 等人在《美国国家科学院院刊》(P.N.A.S.) 上发表了首例在质粒中对 DNA 进行分子克隆和扩增的工作,标志着合成生物学的开端。<br />
<br />
<br />
<br />
'''1978:''' [[Werner Arber|Arber]], [[Daniel Nathans|Nathans]] and [[Hamilton O. Smith|Smith]] win the [[Nobel Prize in Physiology or Medicine]] for the discovery of [[restriction enzyme]]s, leading Szybalski to offer an editorial comment in the journal ''[[Gene (journal)|Gene]]'':<br />
<br />
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene:<br />
<br />
1978年: 阿尔伯 (Arber)、纳森斯 (Nathans) 和史密斯 (Smith) 因发现限制性内切酶而获得诺贝尔生理学或医学奖,这促使齐巴尔斯基 (Szybalski) 在《基因》(Gene) 杂志上发表了一篇社论评论:<br />
<br />
<br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.<ref>{{cite journal | vauthors = Szybalski W, Skalka A | title = Nobel prizes and restriction enzymes | journal = Gene | volume = 4 | issue = 3 | pages = 181–2 | date = November 1978 | pmid = 744485 | doi = 10.1016/0378-1119(78)90016-1 }}</ref></blockquote><br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.</blockquote><br />
<br />
<blockquote>限制性核酸酶的研究不仅使我们能够很容易地构建重组 DNA 分子和分析单个基因,而且使我们进入了合成生物学的新时代: 不仅可以描述和分析现有的基因,而且可以构建和评估新的基因排列。</blockquote><br />
<br />
<br />
<br />
'''1988:''' First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in ''Science'' by Mullis ''et al.''<ref>{{cite journal | vauthors = Saiki RK, Gelfand DH, Stoffel S, Scharf SJ, Higuchi R, Horn GT, Mullis KB, Erlich HA | title = Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase | journal = Science | volume = 239 | issue = 4839 | pages = 487–491 | date = 1988 | pmid = 2448875 | doi = 10.1126/science.239.4839.487 }}</ref> This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988年: 马利斯 (Mullis) 等人在《科学》杂志上发表了首次利用热稳定 DNA 聚合酶进行聚合酶链式反应 (PCR) 扩增 DNA 的成果。这样就无需在每个 PCR 循环后补加新的 DNA 聚合酶,从而大大简化了 DNA 的突变和组装。<br />
<br />
<br />
<br />
'''2000:''' Two papers in [[Nature (journal)|Nature]] report [[synthetic biological circuits]], a genetic toggle switch and a biological clock, by combining genes within [[Escherichia coli|''E. coli'']] cells.<ref name=":0">{{cite journal | vauthors = Elowitz MB, Leibler S | title = A synthetic oscillatory network of transcriptional regulators | journal = Nature | volume = 403 | issue = 6767 | pages = 335–8 | date = January 2000 | pmid = 10659856 | doi = 10.1038/35002125 | bibcode = 2000Natur.403..335E | s2cid = 41632754 }}</ref><ref name=":1">{{cite journal | vauthors = Gardner TS, Cantor CR, Collins JJ | title = Construction of a genetic toggle switch in Escherichia coli | journal = Nature | volume = 403 | issue = 6767 | pages = 339–42 | date = January 2000 | pmid = 10659857 | doi = 10.1038/35002131 | bibcode = 2000Natur.403..339G | s2cid = 345059 }}</ref><br />
<br />
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.<br />
<br />
2000年: 《自然》杂志的两篇论文报告了通过组合大肠杆菌细胞内的基因构建的合成生物电路: 一个基因拨动开关和一个生物钟。<br />
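The ring-oscillator idea behind the 2000 "biological clock" (the repressilator) lends itself to a small numerical sketch. The following is a reduced, protein-only toy model in Python with made-up parameters, not the six-equation model or fitted constants from the original paper:

```python
# Reduced, protein-only sketch of a three-gene repression ring (repressilator
# style): gene 0 represses gene 1, gene 1 represses gene 2, gene 2 represses
# gene 0. All parameter values are illustrative, not from Elowitz & Leibler.

def repressilator(steps=20000, dt=0.01, alpha=50.0, n=3.0, decay=1.0):
    """Euler-integrate dp_i/dt = alpha / (1 + p_repressor^n) - decay * p_i."""
    p = [1.0, 2.0, 3.0]          # asymmetric start so oscillations can develop
    history = []
    for _ in range(steps):
        p = [p[i] + dt * (alpha / (1.0 + p[(i - 1) % 3] ** n) - decay * p[i])
             for i in range(3)]
        history.append(tuple(p))
    return history

traj = repressilator()
late = [t[0] for t in traj[len(traj) // 2:]]
print(max(late), min(late))      # sustained swing: peak stays well above trough
```

With an odd number of repressors and a sufficiently steep Hill coefficient, the unique fixed point is unstable and the protein levels settle into sustained oscillations rather than a steady state, which is the qualitative behavior the 2000 paper engineered into living cells.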
<br />
<br />
<br />
'''2003:''' The most widely used standardized DNA parts, [[BioBrick]] plasmids, are invented by [[Tom Knight (scientist)|Tom Knight]].<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003年: 汤姆·奈特 (Tom Knight) 发明了最广泛使用的标准化 DNA 部件,即生物积木 (BioBrick) 质粒。这些部件将成为次年在麻省理工学院创立的国际基因工程机器大赛 (iGEM) 的核心。<br />
<br />
<br />
<br />
[[File:Synthetic Biology Open Language (SBOL) standard visual symbols.png|thumb|upright=1.25| [[Synthetic Biology Open Language]] (SBOL) standard visual symbols for use with [[BioBrick|BioBricks Standard]]]]<br />
<br />
[[Synthetic Biology Open Language (SBOL) standard visual symbols for use with BioBricks Standard]]<br />
<br />
与生物积木标准 (BioBricks Standard) 一起使用的合成生物学开放语言 (SBOL) 标准视觉符号<br />
<br />
<br />
<br />
'''2003:''' Researchers engineer an artemisinin precursor pathway in ''E. coli''.<ref>Martin, V. J., Pitera, D. J., Withers, S. T., Newman, J. D. & Keasling, J. D. Engineering a mevalonate pathway in Escherichia coli for production of terpenoids. Nature Biotech. 21, 796–802 (2003).</ref><br />
<br />
2003: Researchers engineer an artemisinin precursor pathway in E. coli.<br />
<br />
2003年: 研究人员在大肠杆菌中设计出青蒿素前体途径。<br />
<br />
<br />
<br />
'''2004:''' First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
2004年: 第一届合成生物学国际会议,合成生物学1.0(SB1.0)在美国麻省理工学院举行。<br />
<br />
<br />
<br />
'''2005:''' Researchers develop a light-sensing circuit in ''E. coli''.<ref>{{cite journal | last1 = Levskaya | first1 = A. | display-authors = etal | year = 2005 | title = "Synthetic biology " engineering Escherichia coli to see light | url = | journal = Nature | volume = 438 | issue = 7067| pages = 441–442 | doi = 10.1038/nature04405 | pmid = 16306980 | s2cid = 4428475 }}</ref> Another group designs circuits capable of multicellular pattern formation.<ref>Basu, S., Gerchman, Y., Collins, C. H., Arnold, F. H. & Weiss, R. "A synthetic multicellular system for programmed pattern formation. ''Nature'' 434,</ref><br />
<br />
2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation.<br />
<br />
2005年: 研究人员在大肠杆菌中开发出一种感光电路。另一个研究小组设计出了能够形成多细胞模式的电路。<br />
<br />
<br />
<br />
'''2006:''' Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<ref>{{cite journal | last1 = Anderson | first1 = J. C. | last2 = Clarke | first2 = E. J. | last3 = Arkin | first3 = A. P. | last4 = Voigt | first4 = C. A. | year = 2006 | title = Environmentally controlled invasion of cancer cells by engineered bacteria | url = | journal = J. Mol. Biol. | volume = 355 | issue = 4| pages = 619–627 | doi = 10.1016/j.jmb.2005.10.076 | pmid = 16330045 }}</ref><br />
<br />
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<br />
<br />
2006年: 研究人员设计了一种能促进细菌侵入肿瘤细胞的合成电路。<br />
<br />
<br />
<br />
'''2010:''' Researchers publish in ''Science'' the first synthetic bacterial genome, called ''M. mycoides'' JCVI-syn1.0.<ref name="gibson52" /><ref>{{Cite news|url=https://www.telegraph.co.uk/news/science/science-news/7747779/American-scientist-who-created-artificial-life-denies-playing-God.html|title=American scientist who created artificial life denies 'playing God'|last=|first=|date=May 2010|website=The Telegraph|url-status=live|archive-url=|archive-date=|access-date=}}</ref> The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
2010年: 研究人员在《科学》杂志上发表了第一个合成细菌基因组,名为丝状支原体 (M. mycoides) JCVI-syn1.0。该基因组利用酵母重组技术由化学合成的 DNA 组装而成。<br />
<br />
<br />
<br />
'''2011:''' Functional synthetic chromosome arms are engineered in yeast.<ref>{{cite journal | last1 = Dymond | first1 = J. S. | display-authors = etal | year = 2011 | title = Synthetic chromosome arms function in yeast and generate phenotypic diversity by design | url = | journal = Nature | volume = 477 | issue = 7365 | pages = 816–821 | doi = 10.1038/nature10403 | pmid = 21918511 | pmc = 3774833 }}</ref><br />
<br />
2011: Functional synthetic chromosome arms are engineered in yeast.<br />
<br />
2011年: 成功在酵母中设计出功能性合成染色体臂。<br />
<br />
<br />
<br />
'''2012:''' Charpentier and Doudna labs publish in ''Science'' the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.<ref>{{cite journal | vauthors = Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E | title = A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity | journal = Science | volume = 337 | issue = 6096 | pages = 816–821 | date = 2012 | pmid = 22745249 | doi = 10.1126/science.1225829 | pmc = 6286148 }}</ref> This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
2012年: Charpentier 和 Doudna 实验室在《科学》杂志上发表了 CRISPR-Cas9细菌免疫系统的程序设计,用于靶向 DNA 的裂解。这项技术极大地简化和扩展了真核生物的基因编辑。<br />
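Cas9's targeting rule, a roughly 20-nt protospacer lying immediately 5' of an NGG PAM on the target strand, can be illustrated with a short scan. The DNA sequence below is invented for illustration only:

```python
# Toy scan for Cas9 target sites: a 20-nt protospacer followed by an NGG PAM.
# The DNA sequence used in the example is made up for illustration.

def find_cas9_sites(seq, guide_len=20):
    """Return (start, protospacer, pam) for each NGG PAM with room for a guide."""
    sites = []
    for i in range(guide_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":          # NGG: any base, then two guanines
            sites.append((i - guide_len, seq[i - guide_len:i], pam))
    return sites

seq = "ATGCTTAGGCATCCGATTACAGGCTTAACGGATCCGTAGCTAGGAGGTTACG"
for start, spacer, pam in find_cas9_sites(seq):
    print(start, spacer, pam)
```

A real guide-design tool would also scan the reverse complement and score off-target matches; this sketch only shows why "programmable" is apt: retargeting the nuclease means choosing a different 20-nt window, not engineering a new protein.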
<br />
<br />
<br />
'''2019:''' Scientists at [[ETH Zurich]] report the creation of the first [[bacterial genome]], named ''[[Caulobacter crescentus|Caulobacter ethensis-2.0]]'', made entirely by a computer, although a related [[wikt:viability|viable form]] of ''C. ethensis-2.0'' does not yet exist.<ref name="EA-20190401">{{cite news |author=ETH Zurich |title=First bacterial genome created entirely with a computer |url=https://www.eurekalert.org/pub_releases/2019-04/ez-fbg032819.php |date=1 April 2019 |work=[[EurekAlert!]] |accessdate=2 April 2019 |author-link=ETH Zurich }}</ref><ref name="PNAS20190401">{{cite journal |author=Venetz, Jonathan E. |display-authors=et al. |title=Chemical synthesis rewriting of a bacterial genome to achieve design flexibility and biological functionality |date=1 April 2019 |journal=[[Proceedings of the National Academy of Sciences of the United States of America]] |volume=116 |issue=16 |pages=8070–8079 |doi=10.1073/pnas.1818259116 |pmid=30936302 |pmc=6475421 }}</ref><br />
<br />
2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.<br />
<br />
2019年: 苏黎世联邦理工学院 (ETH Zurich) 的科学家报告称,他们创造出了第一个完全由计算机生成的细菌基因组,命名为 Caulobacter ethensis-2.0,尽管与之对应的可存活形式尚不存在。<br />
<br />
<br />
<br />
'''2019:''' Researchers report the production of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515">{{cite news |last=Zimmer |first=Carl |authorlink=Carl Zimmer |title=Scientists Created Bacteria With a Synthetic Genome. Is This Artificial Life? - In a milestone for synthetic biology, colonies of E. coli thrive with DNA constructed from scratch by humans, not nature. |url=https://www.nytimes.com/2019/05/15/science/synthetic-genome-bacteria.html |date=15 May 2019 |work=[[The New York Times]] |accessdate=16 May 2019 }}</ref><ref name="NAT-20190515">{{cite journal |author=Fredens, Julius |display-authors=et al. |title=Total synthesis of Escherichia coli with a recoded genome |date=15 May 2019 |journal=[[Nature (journal)|Nature]] |volume=569 |issue=7757 |pages=514–518 |doi=10.1038/s41586-019-1192-5 |pmid=31092918 |pmc=7039709 |bibcode=2019Natur.569..514F }}</ref><br />
<br />
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacteria Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons instead, in order to encode 20 amino acids.<br />
<br />
2019年: 研究人员报告制造出一种新的合成(可能是人工的)可存活生命形式,即大肠杆菌的一个变种: 其细菌基因组中天然的64个密码子被减少为59个,仍可编码全部20种氨基酸。<br />
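The 64-to-59 reduction works by synonymous codon compression: every occurrence of a target codon is replaced genome-wide by a synonym that encodes the same amino acid (the 2019 work removed, to the best of my reading, the serine codons TCG and TCA and the amber stop TAG). A hedged toy sketch of that kind of recoding, with an invented gene:

```python
# Sketch of synonymous codon compression in the spirit of the 2019 recoded
# E. coli. The replacement table follows the reported scheme (an assumption
# here, not taken verbatim from the paper); the toy gene is invented.

RECODE = {"TCG": "AGC",   # Ser -> Ser (synonymous swap)
          "TCA": "AGT",   # Ser -> Ser (synonymous swap)
          "TAG": "TAA"}   # amber stop -> ochre stop

def recode(gene):
    """Swap target codons in-frame; the encoded protein is unchanged."""
    assert len(gene) % 3 == 0, "expects a whole number of codons"
    codons = [gene[i:i + 3] for i in range(0, len(gene), 3)]
    return "".join(RECODE.get(c, c) for c in codons)

gene = "ATGTCATTGTCGGAAACCTAG"    # Met-Ser-Leu-Ser-Glu-Thr-Stop
print(recode(gene))               # same protein, three codons freed up
```

After recoding, the three freed codons no longer appear anywhere in frame, which is what makes them available for reassignment to non-natural amino acids.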
<br />
<br />
<br />
== Perspectives 各方观点 ==<br />
<br />
Engineers view biology as a ''technology'' (in other words, a given system's ''[[biotechnology]]'' or its ''[[biological engineering]]'').<ref>{{cite journal | volume = 6 | last = Zeng | first = Jie (Bangzhe) | title = On the concept of systems bio-engineering | journal = Communication on Transgenic Animals, June 1994, CAS, PRC }}</ref> Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see [[Biomedical Engineering]]) and our environment.<ref>{{cite journal | volume = 6 | last = Chopra | first = Paras | author2 = Akhil Kamma | title = Engineering life through Synthetic Biology | journal = In Silico Biology }}</ref><br />
<br />
Engineers view biology as a technology (in other words, a given system's biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see Biomedical Engineering) and our environment.<br />
<br />
工程师将生物学视为一种技术(换言之,特定系统的生物技术或其生物工程)。合成生物学包括对生物技术的广泛重新定义和扩展,其最终目标是能够设计和建造可处理信息、操纵化学物质、制造材料和结构、生产能源、提供食物,并维护和增强人类健康(见生物医学工程)与环境的工程生物系统。<br />
<br />
<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. {{citation needed|date=May 2020}}<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. <br />
<br />
合成生物学的研究可以根据其处理问题的方法大致分为几类: 生物部件的标准化、生物分子工程和基因组工程。<br />
<br />
<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. [[Genetic engineering]] includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
生物分子工程包括旨在创建功能单元工具包的方法,这些功能单元可被引入活细胞,以呈现新的技术功能。基因工程包括为完整或最小有机体构建合成染色体的方法。<br />
<br />
<br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<ref>{{cite journal | vauthors = Channon K, Bromley EH, Woolfson DN | title = Synthetic biology through biomolecular design and engineering | journal = Current Opinion in Structural Biology | volume = 18 | issue = 4 | pages = 491–8 | date = August 2008 | pmid = 18644449 | doi = 10.1016/j.sbi.2008.06.006 }}</ref><br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<br />
<br />
生物分子设计是指对生物分子组件进行从头设计和加性组合的总体思想。这些方法有一个共同的任务: 通过创造性地操纵前一层次中较简单的部件,在更高的复杂性层次上开发更具合成性的实体。<br />
<br />
<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate.<ref>{{cite journal | first = M | last = Stone | title = Life Redesigned to Suit the Engineering Crowd | journal = Microbe | volume = 1 | issue = 12 | pages = 566–570 | date = 2006 | s2cid = 7171812 | url = https://pdfs.semanticscholar.org/8d45/e0f37a0fb6c1a3c659c71ee9c52619b18364.pdf }}</ref> Re-writers draw inspiration from [[refactoring]], a process sometimes used to improve computer software.<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.<br />
<br />
另一方面,“重写者”指的是有兴趣检验生物系统不可还原性的合成生物学家。由于天然生物系统的复杂性,从头开始重建感兴趣的天然系统,以提供更易于理解、控制和操作的工程替代品,反而会更简单。重写者的灵感来自重构 (refactoring),一种有时用于改进计算机软件的过程。<br />
<br />
<br />
<br />
== Enabling technologies 使能技术 ==<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include [[standardization]] of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.<ref>{{cite journal | vauthors = Baker D, Church G, Collins J, Endy D, Jacobson J, Keasling J, Modrich P, Smolke C, Weiss R | title = Engineering life: building a fab for biology | journal = Scientific American | volume = 294 | issue = 6 | pages = 44–51 | date = June 2006 | pmid = 16711359 | doi = 10.1038/scientificamerican0606-44 | bibcode = 2006SciAm.294f..44B }}</ref> Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and [[computer-aided design]] (CAD).<br />
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).<br />
<br />
一些新的使能技术对合成生物学的成功至关重要。相关概念包括生物部件的标准化和层次化抽象,以便在合成系统中使用这些部件。基础技术包括 DNA 的读与写(测序与合成)。为了进行精确建模和计算机辅助设计 (CAD),需要在多种条件下进行测量。<br />
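The pairing of standardized parts with hierarchical abstraction can be mimicked in a few lines of Python: basic parts carry a declared type, and they compose into a device only when they respect a standard order, so a designer can reason about the device without inspecting individual sequences. Part names and sequences below are invented placeholders, not real BioBrick entries:

```python
# Toy model of part standardization + hierarchical abstraction: typed parts
# compose into a device; the device is the next abstraction level up.
# All part names and sequences are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    name: str
    kind: str          # "promoter", "rbs", "cds" or "terminator"
    sequence: str

STANDARD_ORDER = ["promoter", "rbs", "cds", "terminator"]

def device(*parts):
    """Compose standardized parts into one device sequence, enforcing order."""
    if [p.kind for p in parts] != STANDARD_ORDER:
        raise ValueError("parts must follow promoter-rbs-cds-terminator order")
    return "".join(p.sequence for p in parts)

reporter = device(
    Part("Ptoy", "promoter",   "TTGACA"),
    Part("Rtoy", "rbs",        "AGGAGG"),
    Part("gfp",  "cds",        "ATGAAA"),
    Part("Ttoy", "terminator", "TTTTTT"),
)
print(reporter)   # the composed device sequence
```

Swapping in a different promoter part changes the device's behavior without touching any other component, which is the practical payoff of standardization.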
<br />
<br />
<br />
=== DNA and gene synthesis DNA 和基因合成===<br />
<br />
{{Main|Artificial gene synthesis|Synthetic genomics}}Driven by dramatic decreases in costs of [[oligonucleotides|oligonucleotide]] ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level.<ref>{{cite journal | vauthors = Kosuri S, Church GM | title = Large-scale de novo DNA synthesis: technologies and applications | journal = Nature Methods | volume = 11 | issue = 5 | pages = 499–507 | date = May 2014 | pmid = 24781323 | doi = 10.1038/nmeth.2918 | pmc = 7098426 }}</ref> In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) [[Hepatitis C]] virus genome from chemically synthesized 60 to 80-mers.<ref>{{cite journal | vauthors = Blight KJ, Kolykhalov AA, Rice CM | title = Efficient initiation of HCV RNA replication in cell culture | journal = Science | volume = 290 | issue = 5498 | pages = 1972–4 | date = December 2000 | pmid = 11110665 | doi = 10.1126/science.290.5498.1972 | bibcode = 2000Sci...290.1972B }}</ref> In 2002 researchers at [[Stony Brook University]] succeeded in synthesizing the 7741 bp [[poliovirus]] genome from its published sequence, producing the second synthetic genome, spanning two years.<ref>{{cite journal | vauthors = Couzin J | title = Virology. 
Active poliovirus baked from scratch | journal = Science | volume = 297 | issue = 5579 | pages = 174–5 | date = July 2002 | pmid = 12114601 | doi = 10.1126/science.297.5579.174b | s2cid = 83531627 | url = https://semanticscholar.org/paper/248000e7bc654631ae217274a77253ceddf270a1 }}</ref> In 2003 the 5386 bp genome of the [[bacteriophage]] [[Phi X 174]] was assembled in about two weeks.<ref name="assembly2003">{{cite journal | vauthors = Smith HO, Hutchison CA, Pfannkoch C, Venter JC | title = Generating a synthetic genome by whole genome assembly: phiX174 bacteriophage from synthetic oligonucleotides | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 26 | pages = 15440–5 | date = December 2003 | pmid = 14657399 | pmc = 307586 | doi = 10.1073/pnas.2237126100 | bibcode = 2003PNAS..10015440S }}</ref> In 2006, the same team, at the [[J. Craig Venter Institute]], constructed and patented a [[Synthetic genomics|synthetic genome]] of a novel minimal bacterium, ''[[Mycoplasma laboratorium]]'' and were working on getting it functioning in a living cell.<ref>{{cite news|url=https://www.nytimes.com/2007/06/29/science/29cells.html|title=Scientists Transplant Genome of Bacteria|last=Wade|first=Nicholas|date=2007-06-29|work=The New York Times|access-date=2007-12-28|issn=0362-4331}}</ref><ref>{{cite journal | vauthors = Gibson DG, Benders GA, Andrews-Pfannkoch C, Denisova EA, Baden-Tillson H, Zaveri J, Stockwell TB, Brownley A, Thomas DW, Algire MA, Merryman C, Young L, Noskov VN, Glass JI, Venter JC, Hutchison CA, Smith HO | title = Complete chemical synthesis, assembly, and cloning of a Mycoplasma genitalium genome | journal = Science | volume = 319 | issue = 5867 | pages = 1215–20 | date = February 2008 | pmid = 18218864 | doi = 10.1126/science.1151721 | bibcode = 2008Sci...319.1215G | s2cid = 8190996 | url = https://semanticscholar.org/paper/8c662fd0e252c85d056aad7ff16009ebe1dd4cbc }}</ref><ref 
name="Ball">{{cite journal|last1=Ball|first1=Philip|date=2016|title=Man Made: A History of Synthetic Life|url=https://www.sciencehistory.org/distillations/magazine/man-made-a-history-of-synthetic-life|journal=Distillations|volume=2|issue=1|pages=15–23|access-date=22 March 2018}}</ref><br />
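Conceptually, building a genome from chemically synthesized 60- to 80-mers amounts to stitching short fragments together by their sequence overlaps. The sketch below models only that bookkeeping; the sequences, overlap length, and greedy strategy are invented for illustration, and real assembly is of course done biochemically (e.g. by polymerase cycling assembly), not in software.

```python
# Toy sketch of overlap-based assembly of a construct from short oligos.
# Sequences and parameters are illustrative only.

def merge(a, b, min_overlap=10):
    """Merge b onto the 3' end of a if they share a long enough overlap."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

def assemble(oligos, min_overlap=10):
    """Greedily extend the first oligo with overlapping neighbours."""
    contig = oligos[0]
    pool = oligos[1:]
    while pool:
        for o in pool:
            merged = merge(contig, o, min_overlap)
            if merged:
                contig = merged
                pool.remove(o)
                break
        else:
            raise ValueError("no oligo overlaps the growing contig")
    return contig

target = "ATGGCTAGCTTGACCGATCGATTACGGATCCAGTCAAGGCTAACCGTTAGCAT"
# Overlapping 20-mers tiling the target with a 10 bp overlap:
oligos = [target[i:i + 20] for i in range(0, len(target) - 10, 10)]
assert assemble(oligos, min_overlap=10) == target
```

In practice the overlaps must also be checked for uniqueness and melting temperature; a greedy merge like this would misassemble repetitive sequence.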
<br />
<br />
<br />
<br />
In 2007 it was reported that several companies were offering [[gene synthesis|synthesis of genetic sequences]] up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks.<ref>{{cite news| issn = 0362-4331| last = Pollack| first = Andrew| title = How Do You Like Your Genes? Biofabs Take Orders | work = The New York Times | access-date = 2007-12-28| date = 2007-09-12 | url = https://www.nytimes.com/2007/09/12/technology/techspecial/12gene.html?pagewanted=2&_r=1}}</ref> [[Oligonucleotide]]s harvested from a photolithographic- or inkjet-manufactured [[DNA chip]] combined with PCR and DNA mismatch error-correction allows inexpensive large-scale changes of [[codons]] in genetic systems to improve [[gene expression]] or incorporate novel amino-acids (see [[George M. Church]]'s and Anthony Forster's synthetic cell projects.<ref>{{Cite web|url=http://arep.med.harvard.edu/SBP|title=Synthetic Biology Projects|website=arep.med.harvard.edu|access-date=2018-02-17}}</ref><ref>{{cite journal | vauthors = Forster AC, Church GM | title = Towards synthesis of a minimal cell | journal = Molecular Systems Biology | volume = 2 | issue = 1 | pages = 45 | date = 2006-08-22 | pmid = 16924266 | pmc = 1681520 | doi = 10.1038/msb4100090 }}</ref>) This favors a synthesis-from-scratch approach.<br />
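The large-scale codon changes mentioned above can be illustrated with a short sketch of synonymous codon replacement. The "preferred codon" table below is an invented stand-in for a real organism's codon-usage table; only the synonymous pairings themselves (Leu→Leu, Arg→Arg, Gly→Gly) are real biology.

```python
# Minimal sketch of synonymous codon replacement, as used when recoding
# genes for improved expression. The preference table is an illustrative
# assumption, not a real organism's codon-usage table.

PREFERRED = {
    "CTT": "CTG",  # Leu -> Leu (synonymous swap)
    "CGT": "CGC",  # Arg -> Arg
    "GGG": "GGC",  # Gly -> Gly
}

def recode(cds):
    """Replace each codon with its preferred synonym, if one is listed."""
    assert len(cds) % 3 == 0, "coding sequence length must be a multiple of 3"
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(PREFERRED.get(c, c) for c in codons)

print(recode("ATGCTTCGTGGGTAA"))  # codons: ATG CTT CGT GGG TAA -> ATG CTG CGC GGC TAA
```

The protein sequence is unchanged because every swap stays within one amino acid's codon family; incorporating novel amino acids would instead reassign a codon to an orthogonal tRNA/synthetase pair.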
<br />
<br />
<br />
<br />
Additionally, the [[CRISPR|CRISPR/Cas]] system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years".<ref name="washpost_crispr">{{cite news|last1=Basulto|first1=Dominic|title=Everything you need to know about why CRISPR is such a hot technology|url=https://www.washingtonpost.com/news/innovations/wp/2015/11/04/everything-you-need-to-know-about-why-crispr-is-such-a-hot-technology/|access-date=5 December 2015|work=Washington Post|date=November 4, 2015}}</ref> While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.<ref name="washpost_crispr" /> Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in [[Do-it-yourself biology|biohacking]].<ref>{{cite news|last1=Kahn|first1=Jennifer|title=The Crispr Quandary|url=https://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html?_r=0|access-date=5 December 2015|work=New York Times|date=November 9, 2015}}</ref><ref>{{cite journal|last1=Ledford|first1=Heidi|title=CRISPR, the disruptor|url=http://www.nature.com/news/crispr-the-disruptor-1.17673|access-date=5 December 2015|agency=Nature News|journal=Nature|date=June 3, 2015|pmid=26040877|doi=10.1038/522020a|volume=522|issue=7554|pages=20–4|bibcode=2015Natur.522...20L|doi-access=free}}</ref><ref>{{cite magazine|last1=Higginbotham|first1=Stacey|title=Top VC Says Gene Editing Is Riskier Than Artificial Intelligence|url=http://fortune.com/2015/12/04/khosla-crispr-ai/|access-date=5 December 2015|magazine=Fortune|date=4 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
=== Sequencing ===<br />
<br />
[[DNA sequencing]] determines the order of [[nucleotide]] bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<ref>{{cite journal| author = Rollie| date = 2012 |title = Designing biological systems: Systems Engineering meets Synthetic Biology| journal = Chemical Engineering Science| volume = 69 | pages = 1–29| doi=10.1016/j.ces.2011.10.068| issue=1|display-authors=etal}}</ref><br />
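The second use above, verifying that a fabricated system matches its design, reduces in the simplest case to comparing the intended sequence against an aligned sequencing read. The sketch below assumes equal-length, already-aligned, error-free reads, which is a toy simplification of real variant calling.

```python
# Sketch: verifying a fabricated construct against its design by listing
# every position where a (toy, pre-aligned) sequencing read disagrees.

def mismatches(designed, sequenced):
    """Return (position, expected, observed) for every base that differs."""
    if len(designed) != len(sequenced):
        raise ValueError("length mismatch; align sequences first")
    return [(i, d, s)
            for i, (d, s) in enumerate(zip(designed, sequenced))
            if d != s]

designed  = "ATGGTGAGCAAGGGCGAG"
sequenced = "ATGGTGACCAAGGGCGAG"   # single point error at position 7
assert mismatches(designed, sequenced) == [(7, "G", "C")]
```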
<br />
<br />
<br />
<br />
=== Microfluidics ===<br />
<br />
[[Microfluidics]], in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them.<ref>{{cite journal | vauthors = Elani Y | title = Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology | journal = Biochemical Society Transactions | volume = 44 | issue = 3 | pages = 723–30 | date = June 2016 | pmid = 27284034 | pmc = 4900754 | doi = 10.1042/BST20160052 }}</ref><ref>{{cite journal | vauthors = Gach PC, Iwai K, Kim PW, Hillson NJ, Singh AK | title = Droplet microfluidics for synthetic biology | journal = Lab on a Chip | volume = 17 | issue = 20 | pages = 3388–3400 | date = October 2017 | pmid = 28820204 | doi = 10.1039/C7LC00576H | osti = 1421856 | url = http://www.escholarship.org/uc/item/6cr3k0v5 }}</ref> It is widely employed in screening assays.<ref>{{cite journal | vauthors = Vinuselvi P, Park S, Kim M, Park JM, Kim T, Lee SK | title = Microfluidic technologies for synthetic biology | journal = International Journal of Molecular Sciences | volume = 12 | issue = 6 | pages = 3576–93 | date = 2011-06-03 | pmid = 21747695 | pmc = 3131579 | doi = 10.3390/ijms12063576 }}</ref><br />
<br />
<br />
<br />
<br />
=== Modularity ===<br />
<br />
The most used<ref name="primer">{{Cite book|title=Synthetic Biology – A Primer|last1=Freemont|first1=Paul S.|last2=Kitney|first2=Richard I.| name-list-style = vanc |date=2012|publisher=World Scientific|isbn=978-1-84816-863-3|doi=10.1142/p837}}</ref>{{rp|22–23}} standardized DNA parts are [[BioBrick]] plasmids, invented by [[Tom Knight (scientist)|Tom Knight]] in 2003.<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> Biobricks are stored at the [[Registry of Standard Biological Parts]] in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the [[international Genetically Engineered Machine]] (iGEM) competition.<ref name="primer" />{{rp|22–23}}<br />
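The key design property of BioBricks is idempotent assembly: every part carries the same standard flanking sequences, and composing two parts yields a new part with those same flanks, so assembly can be iterated indefinitely. A sketch of that invariant follows; the `PREFIX`, `SUFFIX`, and `SCAR` strings are placeholders for illustration, not the actual RFC 10 restriction-site sequences.

```python
# Sketch of idempotent part composition in the BioBrick spirit: composing
# two standard parts yields another standard part (with a small "scar"
# between them), so the operation nests. Flank/scar strings are
# placeholders, not the real RFC 10 sequences.

PREFIX, SUFFIX, SCAR = "gaattc", "ctgcag", "tactag"

def make_part(insert):
    return PREFIX + insert + SUFFIX

def compose(a, b):
    """Join two standard parts into one standard part with a scar between."""
    assert a.startswith(PREFIX) and a.endswith(SUFFIX)
    assert b.startswith(PREFIX) and b.endswith(SUFFIX)
    inner_a = a[len(PREFIX):-len(SUFFIX)]
    inner_b = b[len(PREFIX):-len(SUFFIX)]
    return PREFIX + inner_a + SCAR + inner_b + SUFFIX

promoter = make_part("TTGACA")
gfp = make_part("ATGGTG")
device = compose(promoter, gfp)
# The composite is itself a standard part, so composition nests:
bigger = compose(device, make_part("TAATAA"))
assert bigger.startswith(PREFIX) and bigger.endswith(SUFFIX)
```

It is this closure property, rather than any particular enzyme choice, that makes hierarchical abstraction (parts → devices → systems) workable.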
<br />
<br />
<br />
<br />
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilient to harsh conditions). Interactions such as [[coiled coil]]s,<ref>{{cite journal | vauthors = Woolfson DN, Bartlett GJ, Bruning M, Thomson AR | title = New currency for old rope: from coiled-coil assemblies to α-helical barrels | journal = Current Opinion in Structural Biology | volume = 22 | issue = 4 | pages = 432–41 | date = August 2012 | pmid = 22445228 | doi = 10.1016/j.sbi.2012.03.002 }}</ref> [[SH3 domain]]-peptide binding<ref>{{cite journal | vauthors = Dueber JE, Wu GC, Malmirchegini GR, Moon TS, Petzold CJ, Ullal AV, Prather KL, Keasling JD | title = Synthetic protein scaffolds provide modular control over metabolic flux | journal = Nature Biotechnology | volume = 27 | issue = 8 | pages = 753–9 | date = August 2009 | pmid = 19648908 | doi = 10.1038/nbt.1557 | s2cid = 2756476 }}</ref> or [[SpyCatcher|SpyTag/SpyCatcher]]<ref>{{cite journal | vauthors = Reddington SC, Howarth M | title = Secrets of a covalent interaction for biomaterials and biotechnology: SpyTag and SpyCatcher | journal = Current Opinion in Chemical Biology | volume = 29 | pages = 94–9 | date = December 2015 | pmid = 26517567 | doi = 10.1016/j.cbpa.2015.10.002 | doi-access = free }}</ref> offer such control. 
In addition it is necessary to regulate protein-protein interactions in cells, such as with light (using [[light-oxygen-voltage-sensing domain]]s) or cell-permeable small molecules by [[chemically induced dimerization]].<ref>{{cite journal | vauthors = Bayle JH, Grimley JS, Stankunas K, Gestwicki JE, Wandless TJ, Crabtree GR | title = Rapamycin analogs with differential binding specificity permit orthogonal control of protein activity | journal = Chemistry & Biology | volume = 13 | issue = 1 | pages = 99–107 | date = January 2006 | pmid = 16426976 | doi = 10.1016/j.chembiol.2005.10.017 | doi-access = free }}</ref><br />
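The "lifetime of seconds up to irreversible" framing above can be made quantitative: for a 1:1 protein complex, the dissociation half-life is t½ = ln(2)/k_off and the equilibrium constant is K_d = k_off/k_on. The rate constants below are illustrative round numbers, not measurements for any particular interaction pair.

```python
import math

# For a 1:1 complex: half-life t_1/2 = ln(2) / k_off, and K_d = k_off / k_on.
# Rate-constant values below are illustrative only.

def half_life(k_off):
    """Complex half-life in seconds for dissociation rate k_off (1/s)."""
    return math.log(2) / k_off

def k_d(k_on, k_off):
    """Equilibrium dissociation constant (M) from on/off rates."""
    return k_off / k_on

# A transient, signaling-grade interaction: k_off ~ 0.1 /s -> ~7 s half-life
print(f"{half_life(0.1):.1f} s")
# A very tight binder: k_off ~ 1e-5 /s -> ~19 h half-life
print(f"{half_life(1e-5) / 3600:.1f} h")
# Same k_on, different k_off -> proportionally different K_d
print(k_d(1e6, 0.1), k_d(1e6, 1e-5))
```

Covalent couplings such as SpyTag/SpyCatcher sit at the limit of this picture: k_off is effectively zero, so the "half-life" is unbounded.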
<br />
<br />
<br />
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the modeling module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<ref name="altszylerUltrasens2014">{{cite journal | vauthors = Altszyler E, Ventura A, Colman-Lerner A, Chernomoretz A | title = Impact of upstream and downstream constraints on a signaling module's ultrasensitivity | journal = Physical Biology | volume = 11 | issue = 6 | pages = 066003 | date = October 2014 | pmid = 25313165 | pmc = 4233326 | doi = 10.1088/1478-3975/11/6/066003 | bibcode = 2014PhBio..11f6003A }}</ref><ref name="altszylerUltrasens2017">{{cite journal | vauthors = Altszyler E, Ventura AC, Colman-Lerner A, Chernomoretz A | title = Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations | journal = PLOS ONE | volume = 12 | issue = 6 | pages = e0180083 | year = 2017 | pmid = 28662096 | pmc = 5491127 | doi = 10.1371/journal.pone.0180083 | bibcode = 2017PLoSO..1280083A | arxiv = 1608.08007 }}</ref><br />
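The context-dependence of ultrasensitivity can be sketched numerically. For a Hill response f(x) = xⁿ/(Kⁿ + xⁿ), the local logarithmic gain d(log f)/d(log x) approaches n at low input and falls toward zero at saturation, so a module whose upstream components only deliver saturating inputs contributes far less sensitivity than it would in isolation. Parameter values below are illustrative.

```python
import math

# Local logarithmic gain of a Hill module, estimated numerically.
# For f(x) = x^n / (K^n + x^n), the gain is n * K^n / (K^n + x^n):
# near n at the foot of the curve, near 0 at saturation.

def hill(x, n=4.0, K=1.0):
    return x**n / (K**n + x**n)

def log_gain(x, n=4.0, K=1.0, eps=1e-6):
    """Numerical d(log f)/d(log x) at input x."""
    x1, x2 = x * (1 - eps), x * (1 + eps)
    return (math.log(hill(x2, n, K)) - math.log(hill(x1, n, K))) / (
        math.log(x2) - math.log(x1))

# In isolation, gain near the foot of the curve approaches n = 4:
print(round(log_gain(0.1), 2))
# Embedded after an upstream stage that saturates the module (x >> K),
# the same module transmits almost no sensitivity:
print(round(log_gain(10.0), 2))
```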
<br />
<br />
<br />
<br />
=== Modeling ===<br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in [[Transcription (biology)|transcription]], [[Translation (biology)|translation]], regulation and induction of gene regulatory networks.<ref>{{cite journal | vauthors = Carbonell-Ballestero M, Duran-Nebreda S, Montañez R, Solé R, Macía J, Rodríguez-Caso C | title = A bottom-up characterization of transfer functions for synthetic biology designs: lessons from enzymology | journal = Nucleic Acids Research | volume = 42 | issue = 22 | pages = 14060–14069 | date = December 2014 | pmid = 25404136 | pmc = 4267673 | doi = 10.1093/nar/gku964 }}</ref><ref>{{cite journal | vauthors = Kaznessis YN | title = Models for synthetic biology | journal = BMC Systems Biology | volume = 1 | issue = 1 | pages = 47 | date = November 2007 | pmid = 17986347 | pmc = 2194732 | doi = 10.1186/1752-0509-1-47 }}</ref><ref>{{cite conference |vauthors=Tuza ZA, Singhal V, Kim J, Murray RM | title = An in silico modeling toolbox for rapid prototyping of circuits in a biomolecular "breadboard" system. |book-title=52nd IEEE Conference on Decision and Control |date=December 2013 |doi=10.1109/CDC.2013.6760079}}</ref><br />
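The simplest such simulation couples transcription and translation as two ordinary differential equations, dm/dt = α − δₘm and dp/dt = βm − δₚp, whose steady state is m* = α/δₘ and p* = βm*/δₚ. A minimal forward-Euler sketch, with all parameter values chosen purely for illustration:

```python
# Minimal deterministic model of constitutive gene expression:
#   dm/dt = alpha - delta_m * m      (transcription / mRNA decay)
#   dp/dt = beta * m - delta_p * p   (translation / protein decay)
# integrated with forward Euler. All parameter values are illustrative.

def simulate(alpha=2.0, delta_m=0.2, beta=5.0, delta_p=0.05,
             dt=0.01, t_end=200.0):
    m = p = 0.0
    for _ in range(int(t_end / dt)):
        dm = alpha - delta_m * m
        dp = beta * m - delta_p * p
        m += dm * dt
        p += dp * dt
    return m, p

m, p = simulate()
# Analytical steady state: m* = alpha/delta_m = 10, p* = beta*m*/delta_p = 1000
print(m, p)
```

Adding regulatory terms (e.g. a Hill-function repression of α by another protein) turns this into a model of a full gene circuit; stochastic simulation is needed when molecule counts are low.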
<br />
<br />
<br />
<br />
=== Synthetic transcription factors ===<br />
<br />
Studies have considered the components of the [[Transcription (biology)|DNA transcription]] mechanism. One desire of scientists creating [[synthetic biological circuit]]s is to be able to control the transcription of synthetic DNA in unicellular organisms ([[prokaryote]]s) and in multicellular organisms ([[eukaryote]]s). One study tested the adjustability of synthetic [[transcription factor]]s (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes.<ref name="Khalil AS 2012">{{cite journal | vauthors = Khalil AS, Lu TK, Bashor CJ, Ramirez CL, Pyenson NC, Joung JK, Collins JJ | title = A synthetic biology framework for programming eukaryotic transcription functions | journal = Cell | volume = 150 | issue = 3 | pages = 647–58 | date = August 2012 | pmid = 22863014 | pmc = 3653585 | doi = 10.1016/j.cell.2012.05.045 }}</ref> Researchers were able to mutate functional regions called [[zinc finger]]s, the DNA specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the [[eukaryotic translation]] mechanisms.<ref name="Khalil AS 2012"/><br />
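The affinity-tuning result above has a simple quantitative reading: for 1:1 binding, operator occupancy is [TF]/(K_d + [TF]), so mutations that raise a zinc finger's K_d proportionally lower occupancy and, with it, transcriptional output. The concentrations and K_d values below are illustrative only.

```python
# Sketch: if transcriptional activity is proportional to operator
# occupancy, weakening the sTF's DNA-binding affinity (raising K_d, e.g.
# by zinc-finger mutation) dials the output down. Numbers are illustrative.

def occupancy(tf_conc, kd):
    """Fraction of operator sites bound, for simple 1:1 binding."""
    return tf_conc / (kd + tf_conc)

tf = 50.0  # nM, free sTF concentration
for kd in (5.0, 50.0, 500.0):  # wild-type vs. progressively weakened fingers
    print(f"K_d = {kd:5.0f} nM -> occupancy {occupancy(tf, kd):.2f}")
```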
<br />
<br />
<br />
<br />
== Applications ==<br />
<br />
=== Biological computers ===<br />
<br />
<br />
A [[biological computer]] refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of [[logic gate]]s in a number of organisms,<ref>{{cite journal | vauthors = Singh V | title = Recent advances and opportunities in synthetic logic gates engineering in living cells | journal = Systems and Synthetic Biology | volume = 8 | issue = 4 | pages = 271–82 | date = December 2014 | pmid = 26396651 | pmc = 4571725 | doi = 10.1007/s11693-014-9154-6 }}</ref> and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform both analog and/or digital computation.<ref>{{cite journal | vauthors = Purcell O, Lu TK | title = Synthetic analog and digital circuits for cellular computation and memory | journal = Current Opinion in Biotechnology | volume = 29 | pages = 146–55 | date = October 2014 | pmid = 24794536 | pmc = 4237220 | doi = 10.1016/j.copbio.2014.04.009 | series = Cell and Pathway Engineering }}</ref><ref>{{cite journal | vauthors = Daniel R, Rubens JR, Sarpeshkar R, Lu TK | title = Synthetic analog computation in living cells | journal = Nature | volume = 497 | issue = 7451 | pages = 619–23 | date = May 2013 | pmid = 23676681 | doi = 10.1038/nature12148 | bibcode = 2013Natur.497..619D | s2cid = 4358570 }}</ref> In human cells research demonstrated a universal logic evaluator that operates in mammalian cells in 2007.<ref>{{cite journal | vauthors = Rinaudo K, Bleris L, Maddamsetti R, Subramanian S, Weiss R, Benenson Y | title = A universal RNAi-based logic evaluator that operates in mammalian cells | journal = Nature Biotechnology | volume = 25 | issue = 7 | pages = 795–801 | date = July 2007 | pmid = 17515909 | doi = 10.1038/nbt1307 | s2cid = 280451 }}</ref> Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital 
computation to detect and kill human cancer cells in 2011.<ref>{{cite journal | vauthors = Xie Z, Wroblewska L, Prochazka L, Weiss R, Benenson Y | title = Multi-input RNAi-based logic circuit for identification of specific cancer cells | journal = Science | volume = 333 | issue = 6047 | pages = 1307–11 | date = September 2011 | pmid = 21885784 | doi = 10.1126/science.1205527 | bibcode = 2011Sci...333.1307X | s2cid = 13743291 | url = https://semanticscholar.org/paper/372e175668b5323d79950b58f12b36f6974a81ef }}</ref> Another group of researchers demonstrated in 2016 that principles of [[computer engineering]], can be used to automate digital circuit design in bacterial cells.<ref>{{cite journal | vauthors = Nielsen AA, Der BS, Shin J, Vaidyanathan P, Paralanov V, Strychalski EA, Ross D, Densmore D, Voigt CA | title = Genetic circuit design automation | journal = Science | volume = 352 | issue = 6281 | pages = aac7341 | date = April 2016 | pmid = 27034378 | doi = 10.1126/science.aac7341 | doi-access = free }}</ref> In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.<ref>{{cite journal | vauthors = Weinberg BH, Pham NT, Caraballo LD, Lozanoski T, Engel A, Bhatia S, Wong WW | title = Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells | journal = Nature Biotechnology | volume = 35 | issue = 5 | pages = 453–462 | date = May 2017 | pmid = 28346402 | pmc = 5423837 | doi = 10.1038/nbt.3805 }}</ref><br />
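The digital-logic idea above can be sketched abstractly: a two-input genetic AND gate, such as a promoter requiring two activators, produces an analog output (each input modeled here as a Hill activation) that is then digitized by thresholding. This is a conceptual model with invented parameters, not a simulation of any published circuit.

```python
# Sketch of a genetic AND gate: expression requires both inducers. Each
# input is a Hill activation; the analog output is digitized against a
# threshold. All parameters are illustrative.

def hill_act(x, K=1.0, n=2.0):
    return x**n / (K**n + x**n)

def and_gate(a, b, threshold=0.5):
    out = hill_act(a) * hill_act(b)   # both activators must be present
    return out, out > threshold

for a, b in [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]:
    out, bit = and_gate(a, b)
    print(f"inputs ({a:4.1f}, {b:4.1f}) -> analog {out:.3f}, digital {int(bit)}")
```

Real circuits must also contend with leaky expression, noise, and the limited sharpness of each stage, which is why design-automation tools characterize every gate's transfer function before wiring gates together.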
<br />
<br />
<br />
=== Biosensors ===<br />
<br />
<br />
A [[biosensor]] refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the [[Luciferase|Lux operon]] of ''[[Aliivibrio fischeri]],''<ref>{{cite journal | vauthors = de Almeida PE, van Rappard JR, Wu JC | title = In vivo bioluminescence for tracking cell fate and function | journal = American Journal of Physiology. Heart and Circulatory Physiology | volume = 301 | issue = 3 | pages = H663–71 | date = September 2011 | pmid = 21666118 | pmc = 3191083 | doi = 10.1152/ajpheart.00337.2011 }}</ref> which codes for the enzyme that is the source of bacterial [[bioluminescence]], and can be placed after a respondent [[Promoter (genetics)|promoter]] to express the luminescence genes in response to a specific environmental stimulus.<ref>{{cite journal | vauthors = Close DM, Xu T, Sayler GS, Ripp S | title = In vivo bioluminescent imaging (BLI): noninvasive visualization and interrogation of biological processes in living animals | journal = Sensors | volume = 11 | issue = 1 | pages = 180–206 | date = 2011 | pmid = 22346573 | pmc = 3274065 | doi = 10.3390/s110100180 }}</ref> One such sensor created, consisted of a [[bioluminescent bacteria]]l coating on a photosensitive [[computer chip]] to detect certain [[petroleum]] [[pollutant]]s. When the bacteria sense the pollutant, they luminesce.<ref>{{cite journal|last=Gibbs|first=W. 
Wayt| name-list-style = vanc |date=1997 |title=Critters on a Chip |url=http://www.sciam.com/article.cfm?id=critters-on-a-chip |journal=Scientific American|access-date=2 Mar 2009}}</ref> Another example of a similar mechanism is the detection of landmines by an engineered ''E.coli'' reporter strain capable of detecting [[TNT]] and its main degradation product [[2,4-Dinitrotoluene|DNT]], and consequently producing a green fluorescent protein ([[Green fluorescent protein|GFP]]).<ref>{{Cite journal|last1=Belkin|first1=Shimshon|last2=Yagur-Kroll|first2=Sharon|last3=Kabessa|first3=Yossef|last4=Korouma|first4=Victor|last5=Septon|first5=Tali|last6=Anati|first6=Yonatan|last7=Zohar-Perez|first7=Cheinat|last8=Rabinovitz|first8=Zahi|last9=Nussinovitch|first9=Amos|date=April 2017|title=Remote detection of buried landmines using a bacterial sensor|journal=Nature Biotechnology|volume=35|issue=4|pages=308–310|doi=10.1038/nbt.3791|pmid=28398330|s2cid=3645230|issn=1087-0156}}</ref><br />
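A biosensor's usefulness hinges on its dose-response: reporter output is a basal leak plus an induced term, and the analyte counts as "detected" only when the signal clearly exceeds the uninduced background. The sketch below uses a Hill-type induced term and an arbitrary three-fold detection criterion; every constant is illustrative.

```python
# Sketch of a whole-cell biosensor's dose-response: reporter output (GFP
# or luminescence) = basal leak + Hill-type induced term; an analyte is
# "detected" when output exceeds a fold-change over the uninduced
# background. All constants are illustrative.

def reporter(analyte, basal=5.0, v_max=100.0, K=10.0, n=1.5):
    return basal + v_max * analyte**n / (K**n + analyte**n)

def detected(analyte, fold=3.0):
    return reporter(analyte) > fold * reporter(0.0)

for conc in (0.0, 1.0, 5.0, 20.0):
    print(f"analyte {conc:5.1f} -> signal {reporter(conc):6.1f}, "
          f"detected: {detected(conc)}")
```

Lowering the basal leak or raising v_max improves the fold-induction and therefore the limit of detection, which is exactly what promoter engineering in reporter strains aims at.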
<br />
<br />
<br />
<br />
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.<ref name="pmid26019220">{{cite journal | vauthors = Danino T, Prindle A, Kwong GA, Skalak M, Li H, Allen K, Hasty J, Bhatia SN | title = Programmable probiotics for detection of cancer in urine | journal = Science Translational Medicine | volume = 7 | issue = 289 | pages = 289ra84 | date = May 2015 | pmid = 26019220 | pmc = 4511399 | doi = 10.1126/scitranslmed.aaa3519 }}</ref><br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. Several methods allow the construction of synthetic DNA components and even entire synthetic genomes, but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or phenotypes while growing and thriving. Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs.<br />
<br />
虽然活细胞可以通过新的 DNA 进行转化,但整个有机体还没有被从头创造出来。有几种方法可以构建合成 DNA 组件,甚至是整个合成基因组,但是一旦获得了所需的遗传密码,它就会被整合到一个活细胞中,这个活细胞在生长和繁衍的过程中,有望表现出所需的新能力或表型。细胞转化被用于创造生物电路,我们可以通过操纵这些电路来产生所需的输出。<br />
<br />
=== Cell transformation 细胞转化 ===<br />
<br />
{{Main|Transformation (genetics)}}Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA, and synthetic-biologist-designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.<br />
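Repressor-based circuits such as the ''lac'' operon can be approximated quantitatively. The Python sketch below (not from the article; all parameter values are hypothetical, not measured constants) models a gene whose promoter is repressed through a Hill function and whose protein product degrades at a constant rate:<br />

```python
# Toy model of a transcriptional repression circuit. Parameter values are
# illustrative only, not measured lac operon constants. Protein output P is
# produced from a promoter repressed by repressor R via a Hill function.

def simulate_repressed_gene(repressor, beta=10.0, K=1.0, n=2, gamma=0.5,
                            dt=0.01, steps=2000):
    """Euler-integrate dP/dt = beta * K^n / (K^n + R^n) - gamma * P."""
    P = 0.0
    for _ in range(steps):
        production = beta * K**n / (K**n + repressor**n)
        P += dt * (production - gamma * P)
    return P

# With no repressor the gene approaches its maximal steady state (beta/gamma);
# with abundant repressor, expression is nearly shut off.
on = simulate_repressed_gene(repressor=0.0)
off = simulate_repressed_gene(repressor=10.0)
```

Removing the repressor switches the modeled gene from near-zero output to its maximal steady state, mirroring the repressed and active states of the ''lac'' operon figure above.<br />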
<br />
<br />
<br />
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering ''E. coli'' and [[yeast]] for commercial production of a precursor of the [[Antimalarial medication|antimalarial drug]], [[Artemisinin]].<ref>{{cite journal | vauthors = Westfall PJ, Pitera DJ, Lenihan JR, Eng D, Woolard FX, Regentin R, Horning T, Tsuruta H, Melis DJ, Owens A, Fickes S, Diola D, Benjamin KR, Keasling JD, Leavell MD, McPhee DJ, Renninger NS, Newman JD, Paddon CJ | title = Production of amorphadiene in yeast, and its conversion to dihydroartemisinic acid, precursor to the antimalarial agent artemisinin | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 3 | pages = E111–8 | date = January 2012 | pmid = 22247290 | pmc = 3271868 | doi = 10.1073/pnas.1110740109 | bibcode = 2012PNAS..109E.111W }}</ref><br />
<br />
The Top7 protein was one of the first proteins designed for a fold that had never been seen before in nature.<br />
<br />
Top7蛋白是最早针对一种自然界中从未出现过的折叠方式而设计的蛋白质之一。<br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be [[Transformation (genetics)|transformed]] with new DNA. Several ways allow constructing synthetic DNA components and even entire [[Artificial gene synthesis|synthetic genomes]], but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or [[phenotype]]s while growing and thriving.<ref>{{cite news|url=https://www.independent.co.uk/news/science/eureka-scientists-unveil-giant-leap-towards-synthetic-life-9219644.html|title=Eureka! Scientists unveil giant leap towards synthetic life|last=Connor|first=Steve|date=28 March 2014|work=The Independent|access-date=2015-08-06}}</ref> Cell transformation is used to create [[Synthetic biological circuit|biological circuits]], which can be manipulated to yield desired outputs.<ref name=":0" /><ref name=":1" /><br />
<br />
Natural proteins can be engineered; for example, by directed evolution, novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with properties similar to hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but insensitive to the native ligand, acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study used two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for production of longer-chain alcohols from sugar.<br />
<br />
天然蛋白质可以被改造,例如,通过定向进化,可以产生与现有蛋白质功能相当或更优的新型蛋白质结构。一个研究小组制造出了一种螺旋束,它能够以与血红蛋白类似的特性结合氧气,但不结合一氧化碳。一个类似的蛋白质结构被生成以支持多种氧化还原酶活性,而另一组生成了一个在结构和序列上全新的 ATP 酶。另一组产生了一类 G 蛋白偶联受体,这类受体可以被惰性小分子N-氧化氯氮平激活,但对天然配体乙酰胆碱不敏感;这些受体被称为 DREADDs。新的功能或蛋白质特异性也可以利用计算方法进行设计。一项研究使用了两种不同的计算方法——用生物信息学和分子模拟方法挖掘序列数据库,用计算酶设计方法重新编程酶的特异性。这两种方法设计出的酶在用糖生产长链醇方面都具有大于100倍的特异性。<br />
<br />
<br />
<br />
By integrating synthetic biology with [[materials science]], it would be possible to use cells as microscopic molecular foundries to produce materials whose properties were genetically encoded. Re-engineering has produced Curli fibers, the [[amyloid]] component of extracellular material of [[biofilms]], as a platform for programmable [[nanomaterial]]. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.<ref>{{cite journal|vauthors=Nguyen PQ, Botyanszki Z, Tay PK, Joshi NS|date=September 2014|title=Programmable biofilm-based materials from engineered curli nanofibres|journal=Nature Communications|volume=5|pages=4945|bibcode=2014NatCo...5.4945N|doi=10.1038/ncomms5945|pmid=25229329|doi-access=free}}</ref><br />
<br />
Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl tyrosine; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA-Aminoacyl tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required.<br />
<br />
另一个常见的研究方向是扩展20种天然氨基酸。除终止密码子外,已鉴定出61个密码子,但在所有生物体中通常只编码20种氨基酸。某些密码子被设计为编码替代氨基酸,包括:非标准氨基酸,如 O-甲基酪氨酸;或外源氨基酸,如4-氟苯丙氨酸。通常情况下,这些项目利用来自其他生物体的重新编码的无义抑制 tRNA-氨酰基 tRNA 合成酶对,虽然在大多数情况下这需要大量的工程改造。<br />
<br />
<br />
<br />
=== Designed proteins 设计蛋白质 ===<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid. For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of Chorismate mutase still had catalytic activity when only 9 amino acids were used.<br />
<br />
其他研究人员通过减少常规的20种氨基酸来研究蛋白质的结构和功能。有限的蛋白质序列库是通过生成蛋白质制成的,其中成组的氨基酸可以被单一的氨基酸所取代。例如,一个蛋白质中的几个非极性氨基酸都可以被同一种非极性氨基酸所取代。一个研究项目证明了,当只使用9种氨基酸时,一种改造过的分支酸变位酶仍然具有催化活性。<br />
<br />
<br />
<br />
[[File:Top7.png|thumb|The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature<ref name="kuhlman03">{{cite journal | vauthors = Kuhlman B, Dantas G, Ireton GC, Varani G, Stoddard BL, Baker D | title = Design of a novel globular protein fold with atomic-level accuracy | journal = Science | volume = 302 | issue = 5649 | pages = 1364–8 | date = November 2003 | pmid = 14631033 | doi = 10.1126/science.1089427 | bibcode = 2003Sci...302.1364K | s2cid = 1939390 | url = https://semanticscholar.org/paper/3188f905b60172dcad17a9b8c23567400c2bb65f }}</ref> ]]<br />
<br />
Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective. The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<br />
<br />
研究人员和公司运用合成生物学来合成具有高活性、最佳产量和有效性的工业酶。这些合成酶旨在改善洗涤剂和无乳糖乳制品等产品,并使其更具成本效益。合成生物学对代谢工程的改进,是工业中用于发现药物和发酵化学品的生物技术手段的一个典型例子。合成生物学可以研究生化生产中的模块化途径系统,并提高代谢生产的产量。人工酶活性及其对代谢反应速率和产量的后续影响,可能发展出"改善细胞特性……用于重要的工业生化生产的高效新策略"。<br />
<br />
<br />
<br />
Natural proteins can be engineered, for example, by [[directed evolution]], novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a [[helix bundle]] that was capable of binding [[oxygen]] with similar properties as [[hemoglobin]], yet did not bind [[carbon monoxide]].<ref>{{cite journal | vauthors = Koder RL, Anderson JL, Solomon LA, Reddy KS, Moser CC, Dutton PL | title = Design and engineering of an O(2) transport protein | journal = Nature | volume = 458 | issue = 7236 | pages = 305–9 | date = March 2009 | pmid = 19295603 | pmc = 3539743 | doi = 10.1038/nature07841 | bibcode = 2009Natur.458..305K }}</ref> A similar protein structure was generated to support a variety of [[oxidoreductase]] activities <ref>{{cite journal | vauthors = Farid TA, Kodali G, Solomon LA, Lichtenstein BR, Sheehan MM, Fry BA, Bialas C, Ennist NM, Siedlecki JA, Zhao Z, Stetz MA, Valentine KG, Anderson JL, Wand AJ, Discher BM, Moser CC, Dutton PL | title = Elementary tetrahelical protein design for diverse oxidoreductase functions | journal = Nature Chemical Biology | volume = 9 | issue = 12 | pages = 826–833 | date = December 2013 | pmid = 24121554 | pmc = 4034760 | doi = 10.1038/nchembio.1362 }}</ref> while another formed a structurally and sequentially novel [[ATPase]].<ref name="WangHecht2020">{{cite journal|last1=Wang|first1=MS|last2=Hecht|first2=MH|title=A Completely De Novo ATPase from Combinatorial Protein Design|journal=Journal of the American Chemical Society|year=2020|volume=142|issue=36|pages=15230–15234|issn=0002-7863|doi=10.1021/jacs.0c02954|pmid=32833456}}</ref> Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule [[clozapine N-oxide]] but insensitive to the native [[ligand]], [[acetylcholine]]; these receptors are known as [[Receptor activated solely by a synthetic ligand|DREADDs]].<ref>{{cite journal | vauthors = Armbruster BN, Li X, Pausch 
MH, Herlitze S, Roth BL | title = Evolving the lock to fit the key to create a family of G protein-coupled receptors potently activated by an inert ligand | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 104 | issue = 12 | pages = 5163–8 | date = March 2007 | pmid = 17360345 | pmc = 1829280 | doi = 10.1073/pnas.0700293104 | bibcode = 2007PNAS..104.5163A }}</ref> Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100 fold specificity for production of longer chain alcohols from sugar.<ref>{{cite journal | vauthors = Mak WS, Tran S, Marcheschi R, Bertolani S, Thompson J, Baker D, Liao JC, Siegel JB | title = Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway | journal = Nature Communications | volume = 6 | pages = 10005 | date = November 2015 | pmid = 26598135 | pmc = 4673503 | doi = 10.1038/ncomms10005 | bibcode = 2015NatCo...610005M }}</ref><br />
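The mutate-and-select logic of directed evolution mentioned above can be illustrated in a few lines of code. This is a conceptual sketch, not a lab protocol: the fitness function below is a stand-in that counts matches to an arbitrary hypothetical target sequence, whereas real campaigns score assayed activity.<br />

```python
import random

# Toy directed-evolution loop: repeatedly mutate a "protein" sequence and
# amplify the fittest variants. TARGET is a made-up "optimal" sequence.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKVLHELLQR"  # hypothetical optimum, stands in for assayed activity

def fitness(seq):
    """Count positions matching the target (proxy for measured activity)."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.1):
    """Randomly substitute each residue with probability `rate`."""
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else c
                   for c in seq)

def evolve(generations=300, pop_size=50):
    random.seed(0)
    population = ["A" * len(TARGET)] * pop_size
    for _ in range(generations):
        variants = [mutate(s) for s in population]
        # Selection: keep the best fifth of variants and amplify them.
        variants.sort(key=fitness, reverse=True)
        population = variants[: pop_size // 5] * 5
    return max(population, key=fitness)

best = evolve()
```

After a few hundred rounds of mutation and selection the population converges toward the high-fitness sequence, which is the essential feedback loop that laboratory directed evolution implements with libraries and screens.<br />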
<br />
<br />
<br />
Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, Ribosome Binding Site Calculator, Cello, and Non-Repetitive Parts Calculator enable the design of new genetic systems.<br />
<br />
科学家可以将数字信息编码到一条合成 DNA 链上。2012年,乔治·M·丘奇用 DNA 将他的一本关于合成生物学的书编码。这5.3 Mb 的数据量比之前存储在合成 DNA 中的最大信息量大了1000多倍。一个类似的项目将威廉·莎士比亚的十四行诗全部编码在 DNA 中。更广泛地说,NUPACK、ViennaRNA、Ribosome Binding Site Calculator、Cello 和 Non-Repetitive Parts Calculator 等算法使新遗传系统的设计成为可能。<br />
<br />
Another common investigation is [[Expanded genetic code|expansion]] of the natural set of 20 [[amino acid]]s. Excluding [[stop codon]]s, 61 [[codons]] have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl [[tyrosine]]; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded [[nonsense suppressor]] [[Transfer RNA|tRNA]]-[[Aminoacyl tRNA synthetase]] pairs from other organisms, though in most cases substantial engineering is required.<ref>{{cite journal | vauthors = Wang Q, Parrish AR, Wang L | title = Expanding the genetic code for biological studies | journal = Chemistry & Biology | volume = 16 | issue = 3 | pages = 323–36 | date = March 2009 | pmid = 19318213 | pmc = 2696486 | doi = 10.1016/j.chembiol.2009.03.001 }}</ref><br />
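The arithmetic of codon reassignment described above can be made concrete in code. The sketch below is illustrative, not a model of any specific system: it builds the 64 codons, confirms that 61 are sense codons, and shows amber suppression, re-coding the UAG stop codon to insert a nonstandard amino acid (written here as a hypothetical placeholder 'X', e.g. for O-methyl tyrosine); the codon table slice is deliberately incomplete.<br />

```python
from itertools import product

# The standard genetic code: 64 codons = 3 stops + 61 sense codons,
# which together encode only 20 amino acids.
BASES = "UCAG"
STOP_CODONS = {"UAA", "UAG", "UGA"}
ALL_CODONS = {"".join(c) for c in product(BASES, repeat=3)}
SENSE_CODONS = ALL_CODONS - STOP_CODONS  # 61 codons

# A tiny, incomplete slice of the standard codon table (illustrative only).
CODON_TABLE = {"AUG": "M", "UUU": "F", "GGC": "G", "AAA": "K"}

def translate(mrna, table):
    """Translate codon by codon; an unsuppressed stop terminates the chain."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon in STOP_CODONS and codon not in table:
            break  # stop codon with no suppressor tRNA: release the peptide
        peptide.append(table.get(codon, "?"))
    return "".join(peptide)

wild_type = translate("AUGUUUUAGAAA", CODON_TABLE)  # terminates at UAG
amber = dict(CODON_TABLE, UAG="X")                  # reassign the amber stop
suppressed = translate("AUGUUUUAGAAA", amber)       # reads through UAG as 'X'
```

With the unmodified table, translation of the sample mRNA stops at UAG; with the reassigned table it reads through, inserting the nonstandard residue, which is the effect the suppressor tRNA/synthetase pairs achieve in vivo.<br />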
<br />
<br />
<br />
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate mRNA or proteins able to use the artificial nucleotides.<br />
<br />
无论是在体外还是体内,许多将非天然核苷酸和氨基酸掺入核酸和蛋白质的技术已经被开发出来。例如,2014年5月,研究人员宣布他们已经成功地将两种新的人工核苷酸引入细菌 DNA。通过在培养基中加入单个的人工核苷酸,他们能够将细菌传代24次;这些细菌没有产生能够利用人工核苷酸的 mRNA 或蛋白质。<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid.<ref>{{cite journal|author=Davidson, AR|author2=Lumb, KJ|author3=Sauer, RT|date=1995|title=Cooperatively folded proteins in random sequence libraries|journal=Nature Structural Biology|volume=2|issue=10|pages=856–864|doi=10.1038/nsb1095-856|pmid=7552709|s2cid=31781262}}</ref> For instance, several [[Chemical polarity|non-polar]] amino acids within a protein can all be replaced with a single non-polar amino acid.<ref>{{cite journal|vauthors=Kamtekar S, Schiffer JM, Xiong H, Babik JM, Hecht MH|date=December 1993|title=Protein design by binary patterning of polar and nonpolar amino acids|journal=Science|volume=262|issue=5140|pages=1680–5|bibcode=1993Sci...262.1680K|doi=10.1126/science.8259512|pmid=8259512}}</ref> One project demonstrated that an engineered version of [[Chorismate mutase]] still had catalytic activity when only 9 amino acids were used.<ref>{{cite journal|vauthors=Walter KU, Vamvaca K, Hilvert D|date=November 2005|title=An active enzyme constructed from a 9-amino acid alphabet|journal=The Journal of Biological Chemistry|volume=280|issue=45|pages=37742–6|doi=10.1074/jbc.M507210200|pmid=16144843|doi-access=free}}</ref><br />
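Alphabet reduction of the kind described above amounts to a many-to-one mapping over residues. The sketch below uses a hypothetical grouping that collapses chemically similar residues onto one representative (here yielding a 6-letter alphabet); the chorismate mutase study used its own specific 9-letter alphabet, not this one.<br />

```python
# Collapse the 20-letter amino acid alphabet onto a reduced set by mapping
# chemically similar residues to a single representative. This particular
# grouping is a made-up example, not the published 9-letter alphabet.
REDUCTION = {
    # nonpolar residues -> Leucine as the single nonpolar representative
    "A": "L", "V": "L", "I": "L", "M": "L", "F": "L", "L": "L",
    # small / polar -> Serine
    "S": "S", "T": "S", "C": "S", "G": "S", "P": "S",
    # acidic -> Glutamate, amides -> Asparagine
    "D": "E", "E": "E", "N": "N", "Q": "N",
    # basic -> Lysine, aromatic -> Tyrosine
    "K": "K", "R": "K", "H": "K", "W": "Y", "Y": "Y",
}

def reduce_alphabet(seq):
    """Rewrite a protein sequence in the reduced alphabet."""
    return "".join(REDUCTION[c] for c in seq)

reduced = reduce_alphabet("MKVAFDE")  # a short hypothetical sequence
```

Every 20-letter sequence maps deterministically onto the smaller alphabet, which is how reduced-alphabet libraries shrink sequence space while preserving a coarse chemical pattern.<br />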
<br />
<br />
<br />
Researchers and companies practice synthetic biology to synthesize [[industrial enzymes]] with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective.<ref>{{cite web|url=https://www.thermofisher.com/us/en/home/life-science/synthetic-biology/synthetic-biology-applications.html|title=Synthetic Biology Applications|website=www.thermofisher.com|access-date=2015-11-12}}</ref> The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<ref>{{cite journal | vauthors = Liu Y, Shin HD, Li J, Liu L | title = Toward metabolic engineering in the context of system biology and synthetic biology: advances and prospects | journal = Applied Microbiology and Biotechnology | volume = 99 | issue = 3 | pages = 1109–18 | date = February 2015 | pmid = 25547833 | doi = 10.1007/s00253-014-6298-y | s2cid = 954858 }}</ref><br />
<br />
Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<br />
<br />
合成生物学引起了美国国家航空航天局的兴趣,因为它可以帮助利用从地球运送的有限化合物组合为宇航员生产资源。特别是在火星上,合成生物学可以实现基于当地资源的生产过程,使其成为开发对地球依赖性较低的载人前哨站的有力工具。<br />
<br />
<br />
<br />
=== Designed nucleic acid systems 设计核酸系统 ===<br />
<br />
Scientists can encode digital information onto a single strand of [[synthetic DNA]]. In 2012, [[George M. Church]] encoded one of his books about synthetic biology in DNA. The 5.3 [[Megabit|Mb]] of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA.<ref>{{cite journal | vauthors = Church GM, Gao Y, Kosuri S | title = Next-generation digital information storage in DNA | journal = Science | volume = 337 | issue = 6102 | pages = 1628 | date = September 2012 | pmid = 22903519 | doi = 10.1126/science.1226355 | bibcode = 2012Sci...337.1628C | s2cid = 934617 | url = https://semanticscholar.org/paper/0856a685e85bcd27c11cd5f385be818deceb27bd }}</ref> A similar project encoded the complete [[sonnet]]s of [[William Shakespeare]] in DNA.<ref>{{cite web|url=http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|title=Huge amounts of data can be stored in DNA|date=23 January 2013|publisher=Sky News|access-date=24 January 2013|archive-url=https://web.archive.org/web/20160531044937/http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|archive-date=2016-05-31 }}</ref> More generally, algorithms such as NUPACK,<ref>{{Cite journal|last1=Zadeh|first1=Joseph N.|last2=Steenberg|first2=Conrad D.|last3=Bois|first3=Justin S.|last4=Wolfe|first4=Brian R.|last5=Pierce|first5=Marshall B.|last6=Khan|first6=Asif R.|last7=Dirks|first7=Robert M.|last8=Pierce|first8=Niles A.|date=2011-01-15|title=NUPACK: Analysis and design of nucleic acid systems|journal=Journal of Computational Chemistry|language=en|volume=32|issue=1|pages=170–173|doi=10.1002/jcc.21596|pmid=20645303}}</ref> ViennaRNA,<ref>{{Cite journal|last1=Lorenz|first1=Ronny|last2=Bernhart|first2=Stephan H.|last3=Höner zu Siederdissen|first3=Christian|last4=Tafer|first4=Hakim|last5=Flamm|first5=Christoph|last6=Stadler|first6=Peter F.|last7=Hofacker|first7=Ivo L.|date=2011-11-24|title=ViennaRNA Package 2.0|journal=Algorithms for 
Molecular Biology|language=en|volume=6|issue=1|pages=26|doi=10.1186/1748-7188-6-26|issn=1748-7188|pmc=3319429|pmid=22115189}}</ref> Ribosome Binding Site Calculator,<ref>{{Cite journal|last1=Salis|first1=Howard M.|last2=Mirsky|first2=Ethan A.|last3=Voigt|first3=Christopher A.|date=October 2009|title=Automated design of synthetic ribosome binding sites to control protein expression|journal=Nature Biotechnology|language=en|volume=27|issue=10|pages=946–950|doi=10.1038/nbt.1568|pmid=19801975|issn=1546-1696|pmc=2782888}}</ref> Cello,<ref>{{Cite journal|last1=Nielsen|first1=A. A. K.|last2=Der|first2=B. S.|last3=Shin|first3=J.|last4=Vaidyanathan|first4=P.|last5=Paralanov|first5=V.|last6=Strychalski|first6=E. A.|last7=Ross|first7=D.|last8=Densmore|first8=D.|last9=Voigt|first9=C. A.|date=2016-04-01|title=Genetic circuit design automation|journal=Science|language=en|volume=352|issue=6281|pages=aac7341|doi=10.1126/science.aac7341|pmid=27034378|issn=0036-8075|doi-access=free}}</ref> and Non-Repetitive Parts Calculator<ref>{{Cite journal|last1=Hossain|first1=Ayaan|last2=Lopez|first2=Eriberto|last3=Halper|first3=Sean M.|last4=Cetnar|first4=Daniel P.|last5=Reis|first5=Alexander C.|last6=Strickland|first6=Devin|last7=Klavins|first7=Eric|last8=Salis|first8=Howard M.|date=2020-07-13|title=Automated design of thousands of nonrepetitive parts for engineering stable genetic systems|url=https://www.nature.com/articles/s41587-020-0584-2|journal=Nature Biotechnology|language=en|pages=1–10|doi=10.1038/s41587-020-0584-2|pmid=32661437|s2cid=220506228|issn=1546-1696}}</ref> enables the design of new genetic systems.<br />
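A minimal sketch of how digital data maps onto DNA bases follows. This toy scheme packs 2 bits per base; real systems such as Church's add addressing blocks, redundancy and error correction, and often use 1 bit per base to avoid error-prone homopolymer runs, so this is illustrative only.<br />

```python
# Encode arbitrary bytes as a DNA base string at 2 bits per base and decode
# them back. A toy scheme: no addressing, redundancy or error correction.

TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Each byte becomes 8 bits, i.e. 4 DNA bases."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Invert the mapping: 4 bases back into one byte."""
    bits = "".join(TO_BITS[b] for b in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

dna = encode(b"Hi")        # 2 bytes -> 8 bases
restored = decode(dna)     # round-trips back to b"Hi"
```

At this density, the 5.3 Mb mentioned above would occupy on the order of millions of bases, which is why synthesis cost and error rates, rather than information density, are the limiting factors in practice.<br />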
<br />
<br />
<br />
Gene functions in the minimal genome of the synthetic organism, Syn 3.<br />
<br />
在合成生物的最小基因组中发挥功能的基因,Syn 3。<br />
<br />
Many technologies have been developed for incorporating [[Nucleic acid analogue|unnatural nucleotides]] and amino acids into nucleic acids and proteins, both ''in vitro'' and ''in vivo''. For example, in May 2014, researchers announced that they had successfully introduced two new artificial [[nucleotides]] into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate [[Messenger RNA|mRNA]] or proteins able to use the artificial nucleotides.<ref name="NYT-20140507">{{cite news|url=https://www.nytimes.com/2014/05/08/business/researchers-report-breakthrough-in-creating-artificial-genetic-code.html|title=Researchers Report Breakthrough in Creating Artificial Genetic Code|last=Pollack|first=Andrew|date=May 7, 2014|work=[[New York Times]]|access-date=May 7, 2014}}</ref><ref name="NATURE-20140507">{{cite journal|last=Callaway|first=Ewen|date=May 7, 2014|title=First life with 'alien' DNA|url=http://www.nature.com/news/first-life-with-alien-dna-1.15179|journal=[[Nature (journal)|Nature]]|doi=10.1038/nature.2014.15179|s2cid=86967999|access-date=May 7, 2014}}</ref><ref name="NATJ-20140507">{{cite journal|vauthors=Malyshev DA, Dhami K, Lavergne T, Chen T, Dai N, Foster JM, Corrêa IR, Romesberg FE|date=May 2014|title=A semi-synthetic organism with an expanded genetic alphabet|journal=Nature|volume=509|issue=7500|pages=385–8|bibcode=2014Natur.509..385M|doi=10.1038/nature13314|pmc=4058825|pmid=24805238}}</ref><br />
<br />
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to either probe the origins of life, study some of the properties of life, or more ambitiously to recreate life from non-living (abiotic) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools. Nobody has yet been able to create such a fully synthetic living cell, although bacterial host cells carrying a chemically synthesized genome were able to grow and replicate. The Mycoplasma laboratorium is the only living organism with a completely engineered genome.<br />
<br />
合成生物学的一个重要课题是合成生命,它涉及在体外由生物分子和/或其化学类似物创造的假想生物体。合成生命实验或者试图探索生命的起源,研究生命的某些特性,或者更雄心勃勃地从非生命(非生物)组分中重新创造生命。合成生命生物学试图创造能够执行重要功能的生命有机体,从制造药品到净化被污染的土地和水。在医学上,它提供了以设计的生物学部件为起点开发新型疗法和诊断工具的前景。目前还没有人能够创造出这样的完全合成细胞,不过携带化学合成基因组的宿主细胞能够生长和复制。实验室支原体(Mycoplasma laboratorium)是唯一拥有完全工程化基因组的生物体。<br />
<br />
<br />
<br />
=== Space exploration 太空探索 ===<br />
<br />
The first living organism with an 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome with an expanded genetic code. The nucleosides added are d5SICS and dNaM. Work toward building synthetic cells has since been organized through collaborative networks and national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. <br />
The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.<br />
<br />
2014年,第一个具有"人工"扩展 DNA 编码的活有机体问世;研究小组使用的大肠杆菌的基因组被提取出来,并被替换为带有扩展遗传密码的染色体。添加的核苷是 d5SICS 和 dNaM。此后,构建合成细胞的工作通过合作网络以及多个国家的国家级合成细胞组织(包括 FabriCell、MaxSynBio 和 BaSyC)展开。欧洲的合成细胞研究在2019年统一为 SynCellEU 倡议。<br />
<br />
Synthetic biology raised [[NASA|NASA's]] interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.<ref name="Verseux, C. 2015 73–100">{{Cite book|author=Verseux, C.|author2=Paulino-Lima, I.|author3=Baque, M.|author4=Billi, D.|author5=Rothschild, L.|date=2016|title=Synthetic Biology for Space Exploration: Promises and Societal Implications|journal=Ambivalences of Creating Life. Societal and Philosophical Dimensions of Synthetic Biology, Publisher: Springer-Verlag|volume=45|pages=73–100|doi=10.1007/978-3-319-21088-9_4|series=Ethics of Science and Technology Assessment|isbn=978-3-319-21087-2}}</ref><ref>{{cite journal|last1=Menezes|first1=A|last2=Cumbers|first2=J|last3=Hogan|first3=J|last4=Arkin|first4=A|date=2014|title=Towards synthetic biological approaches to resource utilization on space missions|journal=Journal of the Royal Society, Interface|volume=12|issue=102|pages=20140715|doi=10.1098/rsif.2014.0715|pmid=25376875|pmc=4277073}}</ref><ref>{{cite journal | vauthors = Montague M, McArthur GH, Cockell CS, Held J, Marshall W, Sherman LA, Wang N, Nicholson WL, Tarjan DR, Cumbers J | title = The role of synthetic biology for in situ resource utilization (ISRU) | journal = Astrobiology | volume = 12 | issue = 12 | pages = 1135–42 | date = December 2012 | pmid = 23140229 | doi = 10.1089/ast.2012.0829 | bibcode = 2012AsBio..12.1135M }}</ref> On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<ref name="Verseux, C. 
2015 73–100" /> Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.<ref>{{Cite web|title=NASA - Designer Plants on Mars|url=https://www.nasa.gov/centers/goddard/news/topstory/2005/mars_plants.html|last=GSFC|first=Bill Steigerwald |website=www.nasa.gov|language=en|access-date=2020-05-29}}</ref><br />
<br />
<br />
<br />
=== Synthetic life 合成生命 ===<br />
<br />
{{Further|Artificially Expanded Genetic Information System|Hypothetical types of biochemistry}}<br />
<br />
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. The other way is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into bacteria. The bacteria then only release target therapeutic molecules to the tumor through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
<br />
长期以来,细菌一直被用于癌症治疗。双歧杆菌和梭状芽胞杆菌选择性地定殖于肿瘤并减小肿瘤体积。最近,合成生物学家对细菌进行了重新编程,使其能够感知特定的癌症状态并做出反应。大多数情况下,细菌被用来直接向肿瘤输送治疗分子,以最小化脱靶效应。为了靶向肿瘤细胞,细菌表面表达出了可以特异性识别肿瘤的肽。所用的肽包括特异性靶向人表皮生长因子受体2的亲和体(affibody)分子和一种合成粘附素。另一种方法是通过在细菌中构建"与"逻辑门,让细菌感知肿瘤微环境,例如缺氧。然后,细菌只通过裂解或细菌分泌系统向肿瘤释放靶向治疗分子。裂解的优点是可以刺激免疫系统并控制生长。这个过程中可以使用多种类型的分泌系统和其他策略。该系统可由外部信号诱导,诱导因子包括化学物质、电磁波或光波。<br />
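The AND-gate logic described above can be sketched as a simple truth function: the engineered bacterium releases its payload only when both tumor-associated inputs fire. The signal names and thresholds below are hypothetical placeholders, not values from any published circuit.<br />

```python
# Boolean sketch of a two-input AND gate in an engineered therapeutic
# bacterium: payload release requires hypoxia AND a tumor-marker signal.
# Thresholds are arbitrary illustrative values on a normalized 0-1 scale.

HYPOXIA_THRESHOLD = 0.7   # hypothetical hypoxia-reporter activation level
MARKER_THRESHOLD = 0.5    # hypothetical tumor-marker binding level

def and_gate(hypoxia_signal, marker_signal):
    """Return True (release payload, e.g. by lysis) only if both inputs fire."""
    return hypoxia_signal > HYPOXIA_THRESHOLD and marker_signal > MARKER_THRESHOLD

release_in_tumor = and_gate(0.9, 0.8)   # both signals present: release
silent_no_marker = and_gate(0.9, 0.1)   # hypoxic but no marker: stay silent
silent_no_hypoxia = and_gate(0.2, 0.8)  # marker without hypoxia: stay silent
```

Requiring both inputs is what keeps such circuits silent in healthy tissue that matches only one of the two conditions, which is the point of using an AND gate rather than a single sensor.<br />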
<br />
[[File:Syn3 genome.svg|thumb|upright=1.25|[[Gene]] functions in the minimal [[genome]] of the synthetic organism, ''[[Syn 3]]''.<ref name="Hutchison">{{cite journal | vauthors = Hutchison CA, Chuang RY, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, Gill J, Kannan K, Karas BJ, Ma L, Pelletier JF, Qi ZQ, Richter RA, Strychalski EA, Sun L, Suzuki Y, Tsvetanova B, Wise KS, Smith HO, Glass JI, Merryman C, Gibson DG, Venter JC | title = Design and synthesis of a minimal bacterial genome | journal = Science | volume = 351 | issue = 6280 | pages = aad6253 | date = March 2016 | pmid = 27013737 | doi = 10.1126/science.aad6253 | bibcode = 2016Sci...351.....H | doi-access = free }}</ref>]]<br />
<br />
One important topic in synthetic biology is ''synthetic life'', that is concerned with hypothetical organisms created ''[[in vitro]]'' from [[biomolecule]]s and/or [[hypothetical types of biochemistry|chemical analogues thereof]]. Synthetic life experiments attempt to either probe the [[origins of life]], study some of the properties of life, or more ambitiously to recreate life from non-living ([[abiotic components|abiotic]]) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water.<ref name="enzymes2014">{{cite news |last=Connor |first=Steve |url=https://www.independent.co.uk/news/science/major-synthetic-life-breakthrough-as-scientists-make-the-first-artificial-enzymes-9896333.html |title=Major synthetic life breakthrough as scientists make the first artificial enzymes |work=The Independent |location=London |date=1 December 2014 |access-date=2015-08-06 }}</ref> In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.<ref name="enzymes2014" /><br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and is unique to cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
在这些治疗方法中应用了多种菌种和菌株。最常用的细菌是鼠伤寒沙门氏菌、大肠杆菌、双歧杆菌、链球菌、乳酸菌、李斯特菌和枯草杆菌。这些物种各有其特性,在组织定殖、与免疫系统的相互作用和应用的便利性方面,它们对癌症治疗各有独到之处。<br />
<br />
<br />
<br />
A living "artificial cell" has been defined as a completely synthetic cell that can capture [[energy]], maintain [[electrochemical gradient|ion gradients]], contain [[macromolecules]] as well as store information and have the ability to [[mutate]].<ref name="Deamer">{{cite journal | vauthors = Deamer D | title = A giant step towards artificial life? | journal = Trends in Biotechnology | volume = 23 | issue = 7 | pages = 336–8 | date = July 2005 | pmid = 15935500 | doi = 10.1016/j.tibtech.2005.05.008 }}</ref> Nobody has been able to create such a cell.<ref name='Deamer'/><br />
<br />
<br />
<br />
<br />
A completely synthetic bacterial chromosome was produced in 2010 by [[Craig Venter]], and his team introduced it into genomically emptied bacterial host cells.<ref name="gibson52">{{cite journal | vauthors = Gibson DG, Glass JI, Lartigue C, Noskov VN, Chuang RY, Algire MA, Benders GA, Montague MG, Ma L, Moodie MM, Merryman C, Vashee S, Krishnakumar R, Assad-Garcia N, Andrews-Pfannkoch C, Denisova EA, Young L, Qi ZQ, Segall-Shapiro TH, Calvey CH, Parmar PP, Hutchison CA, Smith HO, Venter JC | title = Creation of a bacterial cell controlled by a chemically synthesized genome | journal = Science | volume = 329 | issue = 5987 | pages = 52–6 | date = July 2010 | pmid = 20488990 | doi = 10.1126/science.1190719 | bibcode = 2010Sci...329...52G | doi-access = free }}</ref> The host cells were able to grow and replicate.<ref>{{cite web| url=https://www.npr.org/templates/transcript/transcript.php?storyId=127010591| title=Scientists Reach Milestone On Way To Artificial Life| access-date=2010-06-09|date=2010-05-20}}</ref><ref>{{cite web|last1=Venter|first1=JC|title=From Designing Life to Prolonging Healthy Life|url=https://www.youtube.com/watch?v=Gwu_djYMm3w&t=30s|website=YouTube|publisher=University of California Television (UCTV)|access-date=1 February 2017}}</ref> ''[[Mycoplasma laboratorium]]'' is the only living organism with a completely engineered genome.<br />
<br />
<br />
<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used ''E. coli'' that had its genome extracted and replaced with a chromosome with an expanded genetic code. The [[nucleoside]]s added are [[d5SICS]] and [[dNaM]].<ref name="NATJ-20140507"/><br />
<br />
<br />
<br />
<br />
In May 2019, researchers, in a milestone effort, reported the creation of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515"/><ref name="NAT-20190515"/><br />
<br />
<br />
<br />
Although several mechanisms can improve safety and control, limitations include the difficulty of delivering large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.<br />
<br />
<br />
In 2017 the international [[Build-a-Cell]] large-scale research collaboration for the construction of a synthetic living cell was started,<ref>{{cite web|url=http://buildacell.io/|title=Build-a-Cell|accessdate=4 Dec 2019}}</ref> followed by national synthetic cell organizations in several countries, including FabriCell,<ref>{{cite web|url=http://fabricell.org/|title=FabriCell|accessdate=8 Dec 2019}}</ref> MaxSynBio<ref>{{cite web|url=https://www.maxsynbio.mpg.de/home/|title=MaxSynBio - Max Planck Research Network in Synthetic Biology|accessdate=8 Dec 2019}}</ref> and BaSyC.<ref>{{cite web|url=http://www.basyc.nl/|title=BaSyC|accessdate=8 Dec 2019}}</ref> The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.<ref>{{cite web|url=http://www.syntheticcell.eu/|title=SynCell EU|accessdate=8 Dec 2019}}</ref><br />
<br />
<br />
<br />
=== Drug delivery platforms ===<br />
<br />
==== Engineered bacteria-based platform ====<br />
<br />
Bacteria have long been used in cancer treatment. ''[[Bifidobacterium]]'' and ''[[Clostridium]]'' selectively colonize tumors and reduce their size.<ref name="Zu_2014">{{cite journal|vauthors=Zu C, Wang J|date=August 2014|title=Tumor-colonizing bacteria: a potential tumor targeting therapy|url=|journal=Critical Reviews in Microbiology|volume=40|issue=3|pages=225–35|doi=10.3109/1040841X.2013.776511|pmid=23964706|s2cid=26498221}}</ref> Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, [[peptide]]s that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an [[affibody molecule]] that specifically targets human [[Epidermal growth factor receptor|epidermal growth factor receptor 2]]<ref name="Gujrati_2014">{{cite journal|vauthors=Gujrati V, Kim S, Kim SH, Min JJ, Choy HE, Kim SC, Jon S|date=February 2014|title=Bioengineered bacterial outer membrane vesicles as cell-specific drug-delivery vehicles for cancer therapy|url=|journal=ACS Nano|volume=8|issue=2|pages=1525–37|doi=10.1021/nn405724x|pmid=24410085}}</ref> and a synthetic [[Adhesin molecule (immunoglobulin -like)|adhesin]].<ref name="Piñero-Lambea_2015">{{cite journal|vauthors=Piñero-Lambea C, Bodelón G, Fernández-Periáñez R, Cuesta AM, Álvarez-Vallina L, Fernández LÁ|date=April 2015|title=Programming controlled adhesion of E. coli to target surfaces, cells, and tumors with synthetic adhesins|journal=ACS Synthetic Biology|volume=4|issue=4|pages=463–73|doi=10.1021/sb500252a|pmc=4410913|pmid=25045780}}</ref> The other way is to allow bacteria to sense the [[tumor microenvironment]], for example hypoxia, by building an AND logic gate into bacteria.<ref>{{cite journal | last1 = Deyneko | first1 = I.V. | last2 = Kasnitz | first2 = N. | last3 = Leschner | first3 = S. 
| last4 = Weiss | first4 = S. | year = 2016| title = Composing a tumor specific bacterial promoter | url = | journal = PLOS ONE | volume = 11| issue = 5| page = e0155338| doi = 10.1371/journal.pone.0155338 | pmid = 27171245 | pmc = 4865170 }}</ref> The bacteria then only release target therapeutic molecules to the tumor through either [[lysis]]<ref>{{cite journal | last1 = Rice | first1 = KC | last2 = Bayles | first2 = KW | year = 2008 | title = Molecular control of bacterial death and lysis | journal = Microbiol Mol Biol Rev | volume = 72 | issue = 1| pages = 85–109 | doi = 10.1128/mmbr.00030-07 | pmid = 18322035 | pmc = 2268280 }}</ref> or the [[bacterial secretion system]].<ref>{{cite journal | last1 = Ganai | first1 = S. | last2 = Arenas | first2 = R. B. | last3 = Forbes | first3 = N. S. | year = 2009 | title = Tumour-targeted delivery of TRAIL using Salmonella typhimurium enhances breast cancer survival in mice | url = | journal = Br. J. Cancer | volume = 101 | issue = 10| pages = 1683–1691 | doi = 10.1038/sj.bjc.6605403 | pmid = 19861961 | pmc = 2778534 }}</ref> Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
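The AND-gate behaviour described above can be sketched abstractly: payload expression stays low unless both the tumor signal (e.g. hypoxia) and an external inducer are present. Below is a minimal sketch using Hill activation functions; all names and parameters are hypothetical, not a model of any specific published circuit.<br />

```python
def hill(x, k=0.5, n=2):
    """Hill activation curve: fractional promoter activity at input level x."""
    return x**n / (k**n + x**n)

def and_gate_output(hypoxia, inducer):
    """Toy transcriptional AND gate: payload expression is the product of
    two Hill-activated inputs, so it stays near zero unless BOTH the
    tumor hypoxia signal and the external inducer are high."""
    return hill(hypoxia) * hill(inducer)

# Only the (high, high) input combination gives strong payload expression.
truth_table = {(h, i): round(and_gate_output(h, i), 2)
               for h in (0.0, 1.0) for i in (0.0, 1.0)}
```

The multiplicative coupling is what makes the gate an AND rather than an OR: either input alone drives the output to zero, mirroring how the engineered bacteria withhold the therapeutic until both conditions are met.<br />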
<br />
<br />
<br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are ''[[Salmonella enterica subsp. enterica|Salmonella typhimurium]]'', [[Escherichia coli|''Escherichia coli'']], ''Bifidobacteria'', ''[[Streptococcus]]'', ''[[Lactobacillus]]'', ''[[Listeria]]'' and ''[[Bacillus subtilis]]''. Each of these species has its own properties and is unique to cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
<br />
<br />
<br />
==== Cell-based platform ====<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on [[Cancer immunotherapy|immunotherapies]], mostly by engineering [[T cell]]s.<br />
<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer [[epitope]]s. [[Chimeric antigen receptor]]s (CARs) are composed of a fragment of an [[antibody]] fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second-generation CAR-based therapy was approved by the FDA.{{Citation needed|date=April 2018}}<br />
<br />
<br />
<br />
Gene switches were designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects.<ref>Jones, B.S., Lamb, L.S., Goldman, F. & Di Stasi, A. Improving the safety of cell therapy products by suicide gene transfer. Front. Pharmacol. 5, 254 (2014).</ref> Mechanisms can more finely control the system and stop and reactivate it.<ref>{{cite journal | last1 = Wei | first1 = P | last2 = Wong | first2 = WW | last3 = Park | first3 = JS | last4 = Corcoran | first4 = EE | last5 = Peisajovich | first5 = SG | last6 = Onuffer | first6 = JJ | last7 = Weiss | first7 = A | last8 = LiWA | year = 2012 | title = Bacterial virulence proteins as tools to rewire kinase pathways in yeast and immune cells | url = | journal = Nature | volume = 488 | issue = 7411| pages = 384–388 | doi = 10.1038/nature11259 | pmid = 22820255 | pmc = 3422413 }}</ref><ref>{{cite journal | last1 = Danino | first1 = T. | last2 = Mondragon-Palomino | first2 = O. | last3 = Tsimring | first3 = L. | last4 = Hasty | first4 = J. | year = 2010 | title = A synchronized quorum of genetic clocks | url = | journal = Nature | volume = 463 | issue = 7279| pages = 326–330 | doi = 10.1038/nature08753 | pmid = 20090747 | pmc = 2838179 }}</ref> Since the number of T-cells is important for therapy persistence and severity, the growth of T-cells is also controlled to tune the effectiveness and safety of therapeutics.<ref>{{cite journal | last1 = Chen | first1 = Y. Y. | last2 = Jensen | first2 = M. C. | last3 = Smolke | first3 = C. D. | year = 2010 | title = Genetic control of mammalian T-cell proliferation with synthetic RNA regulatory systems | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 107 | issue = 19| pages = 8531–6 | doi = 10.1073/pnas.1001721107 | pmid = 20421500 | pmc = 2889348 }}</ref><br />
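The two control layers described above — a reversible stop/reactivate mechanism plus an irreversible kill switch — can be sketched as a toy state model. This is a hypothetical illustration of the logic, not a real therapeutic design.<br />

```python
class SafetySwitches:
    """Toy state model of layered safety controls in an engineered cell
    therapy: a reversible pause switch to stop and reactivate the cells,
    and an irreversible kill switch triggered on severe side effects."""

    def __init__(self):
        self.paused = False
        self.killed = False

    def pause(self):                  # reversible stop
        self.paused = True

    def reactivate(self):             # restart a paused, still-viable therapy
        self.paused = False

    def trigger_kill_switch(self):    # e.g. suicide-gene induction
        self.killed = True

    def therapeutic_activity(self):
        """Output is zero whenever paused; permanently zero once killed."""
        return 0.0 if (self.killed or self.paused) else 1.0
```

The asymmetry is the point of the design: `reactivate` undoes a pause but cannot undo the kill switch, which models a suicide-gene mechanism that destroys the therapeutic cells rather than merely silencing them.<br />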
<br />
<br />
<br />
<br />
<br />
== Ethics ==<br />
<br />
<br />
{{Update|section|date=January 2019}}<br />
<br />
<br />
<br />
The creation of new life and the tampering of existing life has raised [[Ethics|ethical concerns]] in the field of synthetic biology and are actively being discussed.<ref name=":3" /><br />
<br />
<br />
<br />
<br />
Common ethical questions include:<br />
<br />
<br />
<br />
* Is it morally right to tamper with nature?<br />
<br />
* Is one playing God when creating new life?<br />
<br />
<br />
* What happens if a synthetic organism accidentally escapes?<br />
<br />
* What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?<br />
<br />
The International Association Synthetic Biology has proposed self-regulation, with specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<br />
<br />
<br />
* Who will have control of and access to the products of synthetic biology? <br />
<br />
* Who will gain from these innovations? Investors? Medical patients? Industrial farmers?<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<br />
<br />
<br />
* Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?<ref>{{Cite web|url=https://www.theguardian.com/science/2018/nov/26/worlds-first-gene-edited-babies-created-in-china-claims-scientist|title= World's first gene-edited babies created in China, claims scientist |last=Staff|first=Agencies|date=November 2018|website=The Guardian|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
* What if a new creation is deserving of moral or legal status?<br />
<br />
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies". The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'". It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public". Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<br />
<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms.<ref>{{Cite journal|title=Synthetic Biology and Ethics: Past, Present, and Future|last=Hayry|first=Mattie|date=April 2017|journal=Cambridge Quarterly of Healthcare Ethics|volume=26|issue=2|pages=186–205|doi=10.1017/S0963180116000803|pmid=28361718}}</ref> Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<ref>{{Cite journal|title=Synthetic biology applied in the agrifood sector: Public perceptions, attitudes and implications for future studies|last=Jin |display-authors=etal |first=Shan|date=September 2019|journal=Trends in Food Science and Technology|volume=91|pages=454–466|doi=10.1016/j.tifs.2019.07.025}}</ref><ref name=":3">{{Cite journal|url=https://heinonline.org/HOL/LandingPage?handle=hein.journals/macq15&div=8&id=&page=| title=Synthetic Biology: Ethics, Exeptionalism and Expectations| pages=45| last=Newson|first=AJ|date=2015|journal=Macquarie Law Journal| volume=15|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
<br />
<br />
Ethical issues have surfaced for [[recombinant DNA]] and [[genetically modified organism]] (GMO) technologies, and extensive regulations of [[genetic engineering]] and pathogen research were in place in many jurisdictions. [[Amy Gutmann]], former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<ref>{{cite journal | last = Gutmann | first = Amy | date = 2012 | title = The Ethics of Synthetic Biology | volume=41 | issue=4 | pages = 17–22 | journal = The Hastings Center Report | doi = 10.1002/j.1552-146X.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
<br />
<br />
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms.<br />
<br />
<br />
=== The "creation" of life 创造生命 ===<br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<br />
<br />
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies.<ref name=":3" /> Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature’s "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how [[algal bloom]]s kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to [[nociception|sense pain]], [[sentience]], and self-perception. Should such life be given moral or legal rights? If so, how?<br />
<br />
<br />
<br />
=== Biosafety and biocontainment ===<br />
<br />
What is most ethically appropriate when considering biosafety measures? How can the accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought has been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and incapable of flourishing in the outside world due to their "unnatural" characteristics, as there has yet to be an example of a transgenic microbe that gained a fitness advantage in the wild.<br />
<br />
<br />
<br />
In general, existing [[Hierarchy of hazard controls|hazard controls]], risk assessment methodologies, and regulations developed for traditional [[genetically modified organism]]s (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" [[biocontainment]] methods in a laboratory context include physical containment through [[biosafety cabinet]]s and [[glovebox]]es, as well as [[personal protective equipment]]. In an agricultural context they include isolation distances and [[pollen]] barriers, similar to methods for [[Biocontainment of genetically modified organisms|biocontainment of GMOs]]. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent [[horizontal gene transfer]] to natural organisms. Examples of intrinsic biocontainment include [[auxotrophy]], biological [[kill switch]]es, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of [[Xenobiology|xenobiological]] organisms using alternative biochemistry, for example using artificial [[xeno nucleic acid]]s (XNA) instead of DNA.<ref name=":12" /><ref name=":32">{{Cite journal|url=https://publications.europa.eu/en/publication-detail/-/publication/bfd7d06c-d3ae-11e5-a4b5-01aa75ed71a1/language-en|title=Opinion on synthetic biology II: Risk assessment methodologies and safety aspects|last=|first=|date=2016-02-12|website=EU [[Directorate-General for Health and Consumers]]|pages=|via=|doi=10.2772/63529|archive-url=|archive-date=|access-date=|volume=|publisher=Publications Office}}</ref> Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce [[histidine]], an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
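The auxotrophy-based containment just described amounts to a simple dependency check: the engineered strain can grow only where its missing nutrient is externally supplied. A toy sketch of that logic (function and parameter names hypothetical):<br />

```python
def can_grow(medium_nutrients, required_supplements=frozenset({"histidine"})):
    """Toy intrinsic-biocontainment check: a histidine auxotroph (engineered
    to be unable to synthesize histidine itself) grows only when every
    required supplement is present in the growth medium."""
    return required_supplements <= set(medium_nutrients)
```

On histidine-supplemented laboratory media the strain grows normally, but in an uncontained environment lacking free histidine the growth condition fails — which is exactly the containment guarantee the paragraph above describes.<br />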
<br />
<br />
<br />
<br />
<br />
=== Biosecurity ===<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies,<ref name="Bügl, H. et al. 2007 627–629">{{cite journal | vauthors = Bügl H, Danner JP, Molinari RJ, Mulligan JT, Park HO, Reichert B, Roth DA, Wagner R, Budowle B, Scripp RM, Smith JA, Steele SJ, Church G, Endy D | title = DNA synthesis and biological security | journal = Nature Biotechnology | volume = 25 | issue = 6 | pages = 627–9 | date = June 2007 | pmid = 17557094 | doi = 10.1038/nbt0607-627 | s2cid = 7776829 }}</ref><ref>{{cite web|url = http://www.synbioproject.org/site/assets/files/1335/hastings.pdf|title = Ethical Issues in Synthetic Biology: An Overview of the Debates|date = |access-date = |website = }}</ref> however, the issues are not seen as new because they were raised during the earlier [[recombinant DNA]] and [[genetically modified organism]] (GMO) debates and extensive regulations of [[genetic engineering]] and pathogen research are already in place in many jurisdictions.<ref name="bioethics.gov">Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/synthetic-biology-report NEW DIRECTIONS The Ethics of Synthetic Biology and Emerging Technologies] Retrieved 2012-04-14.</ref><br /><br />
<br />
<br />
<br />
=== European Union ===<br />
<br />
<br />
<br />
The [[European Union]]-funded project SYNBIOSAFE<ref>[http://www.synbiosafe.eu/ SYNBIOSAFE official site]</ref> has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.<ref name="Priorities">{{cite journal | vauthors = Schmidt M, Ganguli-Mitra A, Torgersen H, Kelle A, Deplazes A, Biller-Andorno N | title = A priority paper for the societal and ethical aspects of synthetic biology | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 3–7 | date = December 2009 | pmid = 19816794 | pmc = 2759426 | doi = 10.1007/s11693-009-9034-7 | url = http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf }}</ref><ref>Schmidt M. Kelle A. Ganguli A, de Vriend H. (Eds.) 2009. [https://www.springer.com/biomed/book/978-90-481-2677-4 "Synthetic Biology. The Technoscience and its Societal Consequences".] Springer Academic Publishing.</ref> The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the [[Do-it-yourself biology|biohacking]] community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called [[dual use technology|dual-use]] challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., [[smallpox]]).<ref>{{cite journal | vauthors = Kelle A | title = Ensuring the security of synthetic biology-towards a 5P governance strategy | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 85–90 | date = December 2009 | pmid = 19816803 | pmc = 2759433 | doi = 10.1007/s11693-009-9041-8 }}</ref> The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<ref>{{cite journal | vauthors = Schmidt M | title = Diffusion of synthetic biology: a challenge to biosafety | journal = Systems and Synthetic Biology | volume = 2 | issue = 1–2 | pages = 1–6 | date = June 2008 | pmid = 19003431 | pmc = 2671588 | doi = 10.1007/s11693-008-9018-z | url = http://www.markusschmidt.eu/pdf/Diffusion_of_synthetic_biology.pdf }}</ref><br />
<br />
<br />
<br />
COSY, another European initiative, focuses on public perception and communication.<ref>[http://www.synbio.at/ COSY: Communicating Synthetic Biology]</ref><ref>{{cite journal | vauthors = Kronberger N, Holtz P, Kerbe W, Strasser E, Wagner W | title = Communicating Synthetic Biology: from the lab via the media to the broader public | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 19–26 | date = December 2009 | pmid = 19816796 | pmc = 2759424 | doi = 10.1007/s11693-009-9031-x }}</ref><ref>{{cite journal | vauthors = Cserer A, Seiringer A | title = Pictures of Synthetic Biology : A reflective discussion of the representation of Synthetic Biology (SB) in the German-language media and by SB experts | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 27–35 | date = December 2009 | pmid = 19816797 | pmc = 2759430 | doi = 10.1007/s11693-009-9038-3 }}</ref> To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published ''SYNBIOSAFE'', a 38-minute documentary film, in October 2009.<ref>[http://www.synbiosafe.eu/DVD COSY/SYNBIOSAFE Documentary]</ref><br />
<br />
<br />
<br />
The International Association Synthetic Biology has proposed self-regulation.<ref>Report of IASB [http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf "Technical solutions for biosecurity in synthetic biology"] {{webarchive |url=https://web.archive.org/web/20110719031805/http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf |date=July 19, 2011 }}, Munich, 2008</ref> The proposal specifies measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<ref name="Bügl, H. et al. 2007 627–629" /><br />
<br />
<br />
<br />
=== United States 美国 ===<br />
<br />
<br />
<br />
In January 2009, the [[Alfred P. Sloan Foundation]] funded the [[Woodrow Wilson Center]], the [[Hastings Center]], and the [[J. Craig Venter Institute]] to examine the public perception, ethics and policy implications of synthetic biology.<ref>Parens E., Johnston J., Moses J. [http://www.thehastingscenter.org/who-we-are/our-research/selected-past-projects/ethical-issues-in-synthetic-biology-2/ Ethical Issues in Synthetic Biology.] 2009.</ref><br />
<br />
<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<ref>[http://sites.nationalacademies.org/PGA/stl/PGA_050738 NAS Symposium official site]</ref><br />
<br />
<br />
<br />
After the publication of the [[Mycoplasma laboratorium|first synthetic genome]] and the accompanying media coverage about "life" being created, President [[Barack Obama]] established the [[Presidential Commission for the Study of Bioethical Issues]] to study synthetic biology.<ref>Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/node/353 FAQ]</ref> The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'".<ref>[http://bioethics.gov/node/353 Synthetic Biology F.A.Q.'s | Presidential Commission for the Study of Bioethical Issues]</ref> It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education.<ref name="bioethics.gov" /><br />
<br />
<br />
<br />
Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact".<ref name=":2">{{cite journal | vauthors = Erickson B, Singh R, Winters P | title = Synthetic biology: regulating industry uses of new biotechnologies | journal = Science | volume = 333 | issue = 6047 | pages = 1254–6 | date = September 2011 | pmid = 21885775 | doi = 10.1126/science.1211066 | bibcode = 2011Sci...333.1254E | s2cid = 1568198 | url = https://semanticscholar.org/paper/6ae989f6b07dc3c8a8694792d6fe8f036a0e0292 }}</ref> These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".<ref name=":2" /><br />
<br />
<br />
<br />
=== Opposition 反对意见 ===<br />
<br />
On March 13, 2012, over 100 environmental and civil society groups, including [[Friends of the Earth]], the [[International Center for Technology Assessment]] and the [[ETC Group (AGETC)|ETC Group]] issued the manifesto ''The Principles for the Oversight of Synthetic Biology''. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the [[human genome]] or [[human microbiome]].<ref>Katherine Xue for Harvard Magazine. September–October 2014 [http://harvardmagazine.com/2014/09/synthetic-biologys-new-menagerie Synthetic Biology’s New Menagerie]</ref><ref>Yojana Sharma for Scidev.net March 15, 2012. [http://www.scidev.net/global/genomics/news/ngos-call-for-international-regulation-of-synthetic-biology.html NGOs call for international regulation of synthetic biology]</ref> [[Richard Lewontin]] wrote that some of the safety tenets for oversight discussed in ''The Principles for the Oversight of Synthetic Biology'' are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<ref>[http://www.nybooks.com/articles/archives/2014/may/08/new-synthetic-biology-who-gains/?insrc=rel#fnr-1 The New Synthetic Biology: Who Gains?] (2014-05-08), [[Richard C. Lewontin]], ''[[New York Review of Books]]''</ref><br />
<br />
<br />
<br />
== Health and safety 健康和安全 ==<br />
<br />
{{Main|Hazards of synthetic biology}}<br />
<br />
<br />
<br />
The hazards of synthetic biology include [[biosafety]] hazards to workers and the public, [[biosecurity]] hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks.<ref name=":02">{{Cite journal|url=https://blogs.cdc.gov/niosh-science-blog/2017/01/24/synthetic-biology/|title=Synthetic Biology and Occupational Risk|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|date=2017-01-24|journal=Journal of Occupational and Environmental Hygiene|archive-url=|archive-date=|access-date=2018-11-30|last3=Schulte|first3=Paul|volume=14|issue=3|pages=224–236|pmid=27754800|doi=10.1080/15459624.2016.1237031|s2cid=205893358}}</ref><ref name=":12">{{Cite journal|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|last3=Schulte|first3=Paul|date=2016-10-18|title=Synthetic biology and occupational risk|journal=Journal of Occupational and Environmental Hygiene|volume=14|issue=3|pages=224–236|doi=10.1080/15459624.2016.1237031|pmid=27754800|s2cid=205893358|issn=1545-9624}}</ref> For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for [[bioterrorism]]. 
Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.<ref name=":7">{{Cite book|title=Biodefense in the Age of Synthetic Biology|date=2018-06-19|publisher=[[National Academies of Sciences, Engineering, and Medicine]]|isbn=9780309465182|location=|pages=|doi=10.17226/24890|pmid=30629396|last1=National Academies Of Sciences|first1=Engineering|author2=Division on Earth Life Studies|last3=Board On Life|first3=Sciences|author4=Board on Chemical Sciences Technology|author5=Committee on Strategies for Identifying Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology}}</ref> Lastly, environmental hazards include adverse effects on [[biodiversity]] and [[ecosystem services]], including potential changes to land use resulting from agricultural use of synthetic organisms.<ref name=":8">{{Cite web|url=http://ec.europa.eu/environment/integration/research/newsalert/multimedia/synthetic_biology_and_biodiversity.htm|title=Future Brief: Synthetic biology and biodiversity|last=|first=|date=September 2016|website=European Commission|pages=14–15|archive-url=|archive-date=|access-date=2019-01-14}}</ref><ref>{{Cite web|url=https://publications.europa.eu/en/publication-detail/-/publication/9b231c71-faf1-11e5-b713-01aa75ed71a1/language-en/format-PDF|title=Final opinion on synthetic biology III: Risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology|last=|first=|date=2016-04-04|website=EU Directorate-General for Health and Food Safety|pages=8, 27|archive-url=|archive-date=|access-date=2019-01-14}}</ref><br />
<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.<ref name=":32" /><ref name=":22">{{Cite web|url=http://www.hse.gov.uk/research/rrpdf/rr944.pdf|title=Synthetic biology: A review of the technology, and current and future needs from the regulatory framework in Great Britain|last1=Bailey|first1=Claire|last2=Metcalf|first2=Heather|date=2012|website=UK [[Health and Safety Executive]]|archive-url=|archive-date=|access-date=2018-11-29|last3=Crook|first3=Brian}}</ref> Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<ref name=":5">{{Citation|last1=Pei|first1=Lei|title=Regulatory Frameworks for Synthetic Biology|date=2012|work=Synthetic Biology|pages=157–226|publisher=John Wiley & Sons, Ltd|doi=10.1002/9783527659296.ch5|isbn=9783527659296|last2=Bar‐Yam|first2=Shlomiya|last3=Byers‐Corbin|first3=Jennifer|last4=Casagrande|first4=Rocco|last5=Eichler|first5=Florentine|last6=Lin|first6=Allen|last7=Österreicher|first7=Martin|last8=Regardh|first8=Pernilla C.|last9=Turlington|first9=Ralph D.}}</ref><ref name=":4">{{Cite journal|last=Trump|first=Benjamin D.|date=2017-11-01|title=Synthetic biology regulation and governance: Lessons from TAPIC for the United States, European Union, and Singapore|journal=Health Policy|volume=121|issue=11|pages=1139–1146|doi=10.1016/j.healthpol.2017.07.010|pmid=28807332|issn=0168-8510|doi-access=free}}</ref><br />
<br />
<br />
<br />
== See also 请参阅 ==<br />
<br />
{{Colbegin|colwidth=20em}}<br />
<br />
* ''[[ACS Synthetic Biology]]'' (journal)<br />
<br />
* [[Bioengineering]]<br />
<br />
* [[Biomimicry]]<br />
<br />
* [[Carlson Curve]]<br />
<br />
* [[Chiral life concept]]<br />
<br />
* [[Computational biology]]<br />
<br />
* [[Computational biomodeling]]<br />
<br />
* [[DNA digital data storage]]<br />
<br />
* [[Engineering biology]]<br />
<br />
{{Colend}}<br />
<br />
Category:Biotechnology<br />
<br />
类别: 生物技术<br />
<br />
Category:Molecular genetics<br />
<br />
类别: 分子遗传学<br />
<br />
Category:Systems biology<br />
<br />
类别: 系统生物学<br />
<br />
Category:Bioinformatics<br />
<br />
类别: 生物信息学<br />
<br />
Category:Biocybernetics<br />
<br />
类别: 生物控制论<br />
<br />
Category:Appropriate technology<br />
<br />
类别: 适用技术<br />
<br />
Category:Emerging technologies<br />
<br />
类别: 新兴技术<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biology]]. Its edit history can be viewed at [[合成生物学/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
粲兰
https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E5%AD%A6&diff=18645 合成生物学 2020-11-18T03:35:02Z
<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共4491,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{redirect|Artificial life form|simulated life forms|Artificial life}}<br />
<br />
{{short description|Interdisciplinary branch of biology and engineering}}<br />
<br />
{{Synthetic biology}}<br />
<br />
[[File:Synthetic Biology Research at NASA Ames.jpg|thumb|Synthetic Biology Research at [[Ames Research Center|NASA Ames Research Center]].]]<br />
<br />
NASA Ames Research Center.]]<br />
<br />
美国国家航空和宇宙航行局/美国国家航空航天局埃姆斯研究中心<br />
<br />
<br />
<br />
'''Synthetic biology''' ('''SynBio''') is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
Synthetic biology (SynBio) is a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.<br />
<br />
合成生物学(SynBio)是一个多学科的研究领域,旨在创造新的生物部件、设备和系统,或重新设计已经在自然界中发现的系统。<br />
<br />
<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as [[biotechnology]], [[genetic engineering]], [[molecular biology]], [[molecular engineering]], [[systems biology]], [[Model lipid bilayer|membrane science]], [[biophysics]], [[Biological engineering|chemical and biological engineering]], [[Electrical engineering|electrical and computer engineering]], [[control engineering]] and [[evolutionary biology]].<br />
<br />
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biotechnology, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology.<br />
<br />
它是科学的一个分支,包括广泛的方法,从不同的学科,如生物技术,基因工程,分子生物学,分子工程,系统生物学,膜科学,生物物理学,化学和生物工程,电子和计算机工程,控制工程和进化生物学。<br />
<br />
<br />
<br />
Due to more powerful [[genetic engineering]] capabilities and decreased DNA synthesis and [[DNA sequencing|sequencing costs]], the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<ref>{{cite journal | last1 = Bueso | first1 = F. Y. | last2 = Tangney | first2 = M. | year = 2017 | title = Synthetic Biology in the Driving Seat of the Bioeconomy | url = | journal = Trends in Biotechnology | volume = 35 | issue = 5| pages = 373–378 | doi = 10.1016/j.tibtech.2017.02.002 | pmid = 28249675 }}</ref><br />
<br />
Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market.<br />
<br />
由于更强大的基因工程能力和降低的 DNA 合成及测序成本,合成生物学领域正在迅速发展。2016年,来自40个国家的350多家公司积极参与合成生物学应用; 所有这些公司在全球市场的净值估计为39亿美元。<br />
<br />
<br />
<br />
== Definition 定义 ==<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
Synthetic biology currently has no generally accepted definition. Here are a few examples:<br />
<br />
合成生物学目前还没有公认的定义。以下是一些定义的示例:<br />
<br />
<br />
<br />
* "the use of a mixture of physical engineering and genetic engineering to create new (and, therefore, synthetic) life forms混合使用物理工程和基因工程来创建新的(因而也即合成的)生命形式。"<ref>{{cite journal | last1 = Hunter | first1 = D | year = 2013 | title = How to object to radically new technologies on the basis of justice: the case of synthetic biology | url = | journal = Bioethics | volume = 27 | issue = 8| pages = 426–434 | doi = 10.1111/bioe.12049 | pmid = 24010854 }}</ref><br />
<br />
<br />
* "an emerging field of research that aims to combine the knowledge and methods of biology, engineering and related disciplines in the design of chemically synthesized DNA to create organisms with novel or enhanced characteristics and traits一个新兴的研究领域,旨在将生物学,工程学和相关学科领域的知识和方法结合到化学合成DNA 的设计中,从而创造出具有新颖或增强特性和特征的有机体。<br />
"<ref>{{cite journal | last1 = Gutmann | first1 = A | year = 2011 | title = The ethics of synthetic biology: guiding principles for emerging technologies | url = | journal = Hastings Center Report | volume = 41 | issue = 4| pages = 17–22 | doi = 10.1002/j.1552-146x.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
* "designing and constructing [[BioBrick|biological modules]], [[biological systems]], and [[biological machine]]s or, re-design of existing biological systems for useful purposes设计并构建生物积木、生物系统以及生物机器,或为有用的目的重新设计现有的生物系统。"<ref name="NakanoEckford2013">{{cite book|url={{google books |plainurl=y |id=uVhsAAAAQBAJ}}|title=Molecular Communication|last1=Nakano|first1=Tadashi|last2=Eckford|first2=Andrew W.|last3=Haraguchi|first3=Tokuko|date=12 September 2013|publisher=Cambridge University Press|isbn=978-1-107-02308-6|name-list-style=vanc}}</ref><br />
<br />
<br />
* “applying the engineering paradigm of systems design to biological systems in order to produce predictable and robust systems with novel functionalities that do not exist in nature” (The European Commission, 2005). This can include the possibility of a [[molecular assembler]], based upon biomolecular systems such as the [[ribosome]].<ref name="RoadMap">{{Cite web|url=http://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf|title=Productive Nanosystems: A Technology Roadmap|website=Foresight Institute}}</ref><br />
“将系统设计的工程范式应用到生物系统中,以产生具有自然界中不存在的新功能的、可预测且稳健的系统”(欧洲委员会,2005年)。这可能包括基于核糖体等生物分子系统的分子组装器的可能性。<br />
<br />
<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
To note, synthetic biology has traditionally been divided into two different approaches: top down and bottom up.<br />
<br />
值得注意的是,合成生物学在传统上被分为两种不同的方法: 自上而下和自下而上。<br />
<br />
<br />
<br />
# The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
The <u>top down</u> approach involves using metabolic and genetic engineering techniques to impart new functions to living cells.<br />
<br />
自上而下的方法包括利用代谢和基因工程技术赋予活细胞以新的功能。<br />
<br />
# The <u>bottom up</u> approach involves creating new biological systems ''in vitro'' by bringing together 'non-living' biomolecular components,<ref>{{cite journal | vauthors = Schwille P | title = Bottom-up synthetic biology: engineering in a tinkerer's world | journal = Science | volume = 333 | issue = 6047 | pages = 1252–4 | date = September 2011 | pmid = 21885774 | doi = 10.1126/science.1211701 | bibcode = 2011Sci...333.1252S | s2cid = 43354332 }}</ref> often with the aim of constructing an [[artificial cell]].<br />
<br />
The <u>bottom up</u> approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.<br />
<br />
自下而上的方法包括在体外创建新的生物系统,将“非活性”的生物分子组件聚集在一起,其目的通常是构建一个人工细胞。<br />
<br />
<br />
<br />
Biological systems are thus assembled module-by-module. [[Cell-free protein synthesis|Cell-free protein expression systems]] are often employed,<ref>{{cite journal | vauthors = Noireaux V, Libchaber A | title = A vesicle bioreactor as a step toward an artificial cell assembly | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 101 | issue = 51 | pages = 17669–74 | date = December 2004 | pmid = 15591347 | pmc = 539773 | doi = 10.1073/pnas.0408236101 | bibcode = 2004PNAS..10117669N }}</ref><ref>{{cite journal | vauthors = Hodgman CE, Jewett MC | title = Cell-free synthetic biology: thinking outside the cell | journal = Metabolic Engineering | volume = 14 | issue = 3 | pages = 261–9 | date = May 2012 | pmid = 21946161 | pmc = 3322310 | doi = 10.1016/j.ymben.2011.09.002 }}</ref><ref>{{cite journal | vauthors = Elani Y, Law RV, Ces O | title = Protein synthesis in artificial cells: using compartmentalisation for spatial organisation in vesicle bioreactors | journal = Physical Chemistry Chemical Physics | volume = 17 | issue = 24 | pages = 15534–7 | date = June 2015 | pmid = 25932977 | doi = 10.1039/C4CP05933F | bibcode = 2015PCCP...1715534E | doi-access = free }}</ref> as are membrane-based molecular machinery. 
There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells,<ref>{{cite journal | vauthors = Elani Y, Trantidou T, Wylie D, Dekker L, Polizzi K, Law RV, Ces O | title = Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules | journal = Scientific Reports | volume = 8 | issue = 1 | pages = 4564 | date = March 2018 | pmid = 29540757 | pmc = 5852042 | doi = 10.1038/s41598-018-22263-3 | bibcode = 2018NatSR...8.4564E }}</ref> and engineering communication between living and synthetic cell populations.<ref>{{cite journal | vauthors = Lentini R, Martín NY, Forlin M, Belmonte L, Fontana J, Cornella M, Martini L, Tamburini S, Bentley WE, Jousson O, Mansy SS | title = Two-Way Chemical Communication between Artificial and Natural Cells | journal = ACS Central Science | volume = 3 | issue = 2 | pages = 117–123 | date = February 2017 | pmid = 28280778 | pmc = 5324081 | doi = 10.1021/acscentsci.6b00330 }}</ref><br />
<br />
Biological systems are thus assembled module-by-module. Cell-free protein expression systems are often employed, as are membrane-based molecular machinery. There are increasing efforts to bridge the divide between these approaches by forming hybrid living/synthetic cells, and engineering communication between living and synthetic cell populations.<br />
<br />
生物系统就是这样一个模块一个模块地组装起来的。无细胞蛋白表达系统和基于膜的分子机器经常被采用。越来越多的研究试图通过构建活细胞与合成细胞的杂合体,以及在活细胞和合成细胞群体之间设计通讯,来弥合这两种方法之间的鸿沟。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
'''1910:''' First identifiable use of the term "synthetic biology" in [[Stéphane Leduc]]'s publication ''Théorie physico-chimique de la vie et générations spontanées''.<ref>[https://openlibrary.org/books/OL23348076M/Théorie_physico-chimique_de_la_vie_et_générations_spontanées Théorie physico-chimique de la vie et générations spontanées, S. Leduc, 1910]</ref> He also noted this term in another publication, ''La Biologie Synthétique'' in 1912.<ref>{{cite book |url=http://www.peiresc.org/bstitre.htm |title=La biologie synthétique, étude de biophysique |last=Leduc |first=Stéphane |date=1912 | veditors = Poinat A }}</ref><br />
<br />
1910: First identifiable use of the term "synthetic biology" in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also noted this term in another publication, La Biologie Synthétique in 1912.<br />
<br />
1910年: 斯特凡纳·勒杜克 (Stéphane Leduc) 在其出版物《Théorie physico-chimique de la vie et générations spontanées》中首次可确认地使用了“合成生物学”一词。他还在1912年的另一本出版物《La Biologie Synthétique》中提到了这个术语。<br />
<br />
<br />
<br />
'''1961:''' Jacob and Monod postulate cellular regulation by molecular networks from their study of the ''lac'' operon in ''E. coli'' and envisioned the ability to assemble new systems from molecular components.<ref>Jacob, F.ß. & Monod, J. On the regulation of gene activity. Cold Spring Harb. Symp. Quant. Biol. 26, 193–211 (1961).</ref><br />
<br />
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components.<br />
<br />
1961年: 雅各布 (Jacob) 和莫诺德 (Monod) 通过他们对大肠杆菌中乳糖操纵子的研究,提出了分子网络调控细胞的假说,并设想了由分子组件组装新系统的能力。<br />
<br />
<br />
<br />
'''1973:''' First molecular cloning and amplification of DNA in a plasmid is published in ''P.N.A.S.'' by Cohen, Boyer ''et al.'', constituting the dawn of synthetic biology.<ref>{{cite journal | vauthors = Cohen SN, Chang AC, Boyer HW, Helling RB | title = Construction of biologically functional bacterial plasmids in vitro | journal = Proc. Natl. Acad. Sci. USA | volume = 70 | issue = 11 | pages = 3240–3244 | date = 1973 | pmid = 4594039 | doi = 10.1073/pnas.70.11.3240 | bibcode = 1973PNAS...70.3240C | pmc = 427208 }}</ref><br />
<br />
1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al., constituting the dawn of synthetic biology.<br />
<br />
1973年: 科恩 (Cohen) 、博耶 (Boyer) 等人在《美国国家科学院院刊》(P.N.A.S.) 上发表了第一篇关于在质粒中对 DNA 进行分子克隆和扩增的文章,标志着合成生物学的开端。<br />
<br />
<br />
<br />
'''1978:''' [[Werner Arber|Arber]], [[Daniel Nathans|Nathans]] and [[Hamilton O. Smith|Smith]] win the [[Nobel Prize in Physiology or Medicine]] for the discovery of [[restriction enzyme]]s, leading Szybalski to offer an editorial comment in the journal ''[[Gene (journal)|Gene]]'':<br />
<br />
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene:<br />
<br />
1978年: 阿尔伯 (Arber) 、纳森斯 (Nathans) 和史密斯 (Smith) 因发现限制性内切酶而获得诺贝尔生理学或医学奖,这使得齐巴尔斯基 (Szybalski) 在《基因》(Gene) 杂志上发表了一篇社论评论:<br />
<br />
<br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.<ref>{{cite journal | vauthors = Szybalski W, Skalka A | title = Nobel prizes and restriction enzymes | journal = Gene | volume = 4 | issue = 3 | pages = 181–2 | date = November 1978 | pmid = 744485 | doi = 10.1016/0378-1119(78)90016-1 }}</ref></blockquote><br />
<br />
<blockquote>The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated.</blockquote><br />
<br />
<blockquote>限制性核酸酶的研究不仅使我们能够很容易地构建重组 DNA 分子和分析单个基因,而且使我们进入了合成生物学的新时代:不仅可以描述和分析现有的基因,而且可以构建和评估新的基因排列。</blockquote><br />
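The "new gene arrangements" Szybalski describes rest on enzymes that cut DNA at fixed recognition sequences. As an illustrative sketch only (not laboratory software), the following locates EcoRI's well-known recognition site GAATTC in a DNA string and cuts it into fragments; the input sequence is invented for demonstration.<br />

```python
# Toy model of a restriction digest. EcoRI recognizes GAATTC and cuts
# between the G and the first A; the example sequence below is made up.

def find_sites(seq, site="GAATTC"):
    """Return 0-based positions where the recognition site occurs."""
    seq = seq.upper()
    return [i for i in range(len(seq) - len(site) + 1)
            if seq[i:i + len(site)] == site]

def digest(seq, site="GAATTC", cut_offset=1):
    """Cut the sequence at each site (offset 1 mimics EcoRI's G^AATTC cut)."""
    fragments, start = [], 0
    for pos in find_sites(seq, site):
        fragments.append(seq[start:pos + cut_offset])
        start = pos + cut_offset
    fragments.append(seq[start:])
    return fragments

# Two EcoRI sites yield three fragments.
fragments = digest("ATTGAATTCGGCCGAATTCTT")
```

Real digests also track the single-stranded AATT overhangs that make fragment ends "sticky"; this sketch only models the cut positions.<br />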
<br />
<br />
<br />
'''1988:''' First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in ''Science'' by Mullis ''et al.''<ref>{{cite journal | vauthors = Saiki RK, Gelfand DH, Stoffel S, Scharf SJ, Higuchi R, Horn GT, Mullis KB, Erlich HA | title = Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase | journal = Science | volume = 239 | issue = 4839 | pages = 487–491 | date = 1988 | pmid = 2448875 | doi = 10.1126/science.239.4839.487 }}</ref> This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.<br />
<br />
1988年: 穆利斯 (Mullis) 等人在《科学》杂志上发表了第一次利用热稳定 DNA 聚合酶通过聚合酶链式反应 (PCR) 实现 DNA 扩增的成果。这样就避免了在每个 PCR 循环后补加新的 DNA 聚合酶,从而大大简化了 DNA 的突变和组装。<br />
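The practical effect of a thermostable polymerase is that the same enzyme survives every denaturation step, so amplification compounds cycle after cycle: after n cycles, N ≈ N0 · (1 + e)^n, where e is the per-cycle efficiency (e = 1 for ideal doubling). A back-of-envelope sketch; the efficiency value used below is an illustrative assumption, not a measured figure.<br />

```python
# Idealized PCR yield: each cycle multiplies the copy number by (1 + e),
# where e is the per-cycle efficiency (1.0 = perfect doubling).

def pcr_yield(n_cycles, initial_copies=1, efficiency=1.0):
    copies = initial_copies
    for _ in range(n_cycles):
        copies *= (1.0 + efficiency)
    return copies

ideal = pcr_yield(30)                       # 2**30, roughly a billion copies
realistic = pcr_yield(30, efficiency=0.9)   # sub-ideal efficiency lowers yield
```

This is why even a single template molecule becomes detectable after a few dozen cycles, and why small efficiency losses compound into large yield differences.<br />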
<br />
<br />
<br />
'''2000:''' Two papers in [[Nature (journal)|Nature]] report [[synthetic biological circuits]], a genetic toggle switch and a biological clock, by combining genes within [[Escherichia coli|''E. coli'']] cells.<ref name=":0">{{cite journal | vauthors = Elowitz MB, Leibler S | title = A synthetic oscillatory network of transcriptional regulators | journal = Nature | volume = 403 | issue = 6767 | pages = 335–8 | date = January 2000 | pmid = 10659856 | doi = 10.1038/35002125 | bibcode = 2000Natur.403..335E | s2cid = 41632754 }}</ref><ref name=":1">{{cite journal | vauthors = Gardner TS, Cantor CR, Collins JJ | title = Construction of a genetic toggle switch in Escherichia coli | journal = Nature | volume = 403 | issue = 6767 | pages = 339–42 | date = January 2000 | pmid = 10659857 | doi = 10.1038/35002131 | bibcode = 2000Natur.403..339G | s2cid = 345059 }}</ref><br />
<br />
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.<br />
<br />
2000年: 《自然》杂志上的两篇论文报告了通过组合大肠杆菌细胞内的基因构建的合成生物电路:一个基因拨动开关和一个生物钟。<br />
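The genetic toggle switch reported by Gardner, Cantor and Collins is commonly modeled as two genes whose products mutually repress each other. A minimal forward-Euler sketch of that standard two-equation model follows; the parameter values are illustrative assumptions chosen to show bistability, not the paper's fitted values.<br />

```python
# Minimal model of a two-repressor toggle switch:
#   du/dt = alpha / (1 + v**beta)  - u
#   dv/dt = alpha / (1 + u**gamma) - v
# Integrated with forward Euler; alpha=10, beta=gamma=2 are illustrative.

def toggle_switch(u0, v0, alpha=10.0, beta=2.0, gamma=2.0, dt=0.01, steps=5000):
    """Return the (u, v) state after integrating from (u0, v0)."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v ** beta) - u
        dv = alpha / (1.0 + u ** gamma) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# Bistability: the final state depends on which repressor starts ahead.
state_a = toggle_switch(5.0, 1.0)   # settles into the high-u / low-v state
state_b = toggle_switch(1.0, 5.0)   # settles into the low-u / high-v state
```

The two stable states act as a one-bit memory: a transient chemical or thermal pulse that pushes the system across the separatrix flips the switch, which is the "toggle" behavior the 2000 paper demonstrated in E. coli.<br />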
<br />
<br />
<br />
'''2003:''' The most widely used standardized DNA parts, [[BioBrick]] plasmids, are invented by [[Tom Knight (scientist)|Tom Knight]].<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the international Genetically Engineered Machine competition (iGEM) founded at MIT in the following year.<br />
<br />
2003年: 最广泛使用的标准化 DNA 部件,即生物积木 (BioBrick) 质粒,由汤姆·奈特 (Tom Knight) 发明。这些部件将成为次年在麻省理工学院创立的国际基因工程机器大赛 (iGEM) 的核心。<br />
<br />
<br />
<br />
[[File:Synthetic Biology Open Language (SBOL) standard visual symbols.png|thumb|upright=1.25| [[Synthetic Biology Open Language]] (SBOL) standard visual symbols for use with [[BioBrick|BioBricks Standard]]]]<br />
<br />
<br />
<br />
<br />
'''2003:''' Researchers engineer an artemisinin precursor pathway in ''E. coli''.<ref>Martin, V. J., Pitera, D. J., Withers, S. T., Newman, J. D. & Keasling, J. D. Engineering a mevalonate pathway in Escherichia coli for production of terpenoids. Nature Biotech. 21, 796–802 (2003).</ref><br />
<br />
<br />
<br />
<br />
'''2004:''' First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at the Massachusetts Institute of Technology, USA.<br />
<br />
<br />
<br />
<br />
'''2005:''' Researchers develop a light-sensing circuit in ''E. coli''.<ref>{{cite journal | last1 = Levskaya | first1 = A. | display-authors = etal | year = 2005 | title = Synthetic biology: engineering Escherichia coli to see light | url = | journal = Nature | volume = 438 | issue = 7067| pages = 441–442 | doi = 10.1038/nature04405 | pmid = 16306980 | s2cid = 4428475 }}</ref> Another group designs circuits capable of multicellular pattern formation.<ref>Basu, S., Gerchman, Y., Collins, C. H., Arnold, F. H. & Weiss, R. A synthetic multicellular system for programmed pattern formation. ''Nature'' 434,</ref><br />
<br />
<br />
<br />
<br />
'''2006:''' Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.<ref>{{cite journal | last1 = Anderson | first1 = J. C. | last2 = Clarke | first2 = E. J. | last3 = Arkin | first3 = A. P. | last4 = Voigt | first4 = C. A. | year = 2006 | title = Environmentally controlled invasion of cancer cells by engineered bacteria | url = | journal = J. Mol. Biol. | volume = 355 | issue = 4| pages = 619–627 | doi = 10.1016/j.jmb.2005.10.076 | pmid = 16330045 }}</ref><br />
<br />
<br />
<br />
<br />
'''2010:''' Researchers publish in ''Science'' the first synthetic bacterial genome, called ''M. mycoides'' JCVI-syn1.0.<ref name="gibson52" /><ref>{{Cite news|url=https://www.telegraph.co.uk/news/science/science-news/7747779/American-scientist-who-created-artificial-life-denies-playing-God.html|title=American scientist who created artificial life denies 'playing God'|last=|first=|date=May 2010|website=The Telegraph|url-status=live|archive-url=|archive-date=|access-date=}}</ref> The genome is made from chemically-synthesized DNA using yeast recombination.<br />
<br />
<br />
<br />
<br />
'''2011:''' Functional synthetic chromosome arms are engineered in yeast.<ref>{{cite journal | last1 = Dymond | first1 = J. S. | display-authors = etal | year = 2011 | title = Synthetic chromosome arms function in yeast and generate phenotypic diversity by design | url = | journal = Nature | volume = 477 | issue = 7365 | pages = 816–821 | doi = 10.1038/nature10403 | pmid = 21918511 | pmc = 3774833 }}</ref><br />
<br />
<br />
<br />
<br />
'''2012:''' Charpentier and Doudna labs publish in ''Science'' the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage.<ref>{{cite journal | vauthors = Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E | title = A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity | journal = Science | volume = 337 | issue = 6096 | pages = 816–821 | date = 2012 | pmid = 22745249 | doi = 10.1126/science.1225829 | pmc = 6286148 }}</ref> This technology greatly simplified and expanded eukaryotic gene editing.<br />
<br />
<br />
<br />
<br />
'''2019:''' Scientists at [[ETH Zurich]] report the creation of the first [[bacterial genome]], named ''[[Caulobacter crescentus|Caulobacter ethensis-2.0]]'', made entirely by a computer, although a related [[wikt:viability|viable form]] of ''C. ethensis-2.0'' does not yet exist.<ref name="EA-20190401">{{cite news |author=ETH Zurich |title=First bacterial genome created entirely with a computer |url=https://www.eurekalert.org/pub_releases/2019-04/ez-fbg032819.php |date=1 April 2019 |work=[[EurekAlert!]] |accessdate=2 April 2019 |author-link=ETH Zurich }}</ref><ref name="PNAS20190401">{{cite journal |author=Venetz, Jonathan E. |display-authors=et al. |title=Chemical synthesis rewriting of a bacterial genome to achieve design flexibility and biological functionality |date=1 April 2019 |journal=[[Proceedings of the National Academy of Sciences of the United States of America]] |volume=116 |issue=16 |pages=8070–8079 |doi=10.1073/pnas.1818259116 |pmid=30936302 |pmc=6475421 }}</ref><br />
<br />
<br />
<br />
<br />
'''2019:''' Researchers report the production of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515">{{cite news |last=Zimmer |first=Carl |authorlink=Carl Zimmer |title=Scientists Created Bacteria With a Synthetic Genome. Is This Artificial Life? - In a milestone for synthetic biology, colonies of E. coli thrive with DNA constructed from scratch by humans, not nature. |url=https://www.nytimes.com/2019/05/15/science/synthetic-genome-bacteria.html |date=15 May 2019 |work=[[The New York Times]] |accessdate=16 May 2019 }}</ref><ref name="NAT-20190515">{{cite journal |author=Fredens, Julius |display-authors=et al. |title=Total synthesis of Escherichia coli with a recoded genome |date=15 May 2019 |journal=[[Nature (journal)|Nature]] |volume=569 |issue=7757 |pages=514–518 |doi=10.1038/s41586-019-1192-5 |pmid=31092918 |pmc=7039709 |bibcode=2019Natur.569..514F }}</ref><br />
<br />
<br />
<br />
<br />
== Perspectives ==
<br />
Engineers view biology as a ''technology'' (in other words, a given system's ''[[biotechnology]]'' or its ''[[biological engineering]]'').<ref>{{cite journal | volume = 6 | last = Zeng | first = Jie (Bangzhe) | title = On the concept of systems bio-engineering | journal = Communication on Transgenic Animals, June 1994, CAS, PRC }}</ref> Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health (see [[Biomedical Engineering]]) and our environment.<ref>{{cite journal | volume = 6 | last = Chopra | first = Paras | author2 = Akhil Kamma | title = Engineering life through Synthetic Biology | journal = In Silico Biology }}</ref><br />
<br />
<br />
<br />
<br />
Studies in synthetic biology can be subdivided into broad classifications according to the approach they take to the problem at hand: standardization of biological parts, biomolecular engineering, genome engineering. {{citation needed|date=May 2020}}<br />
<br />
<br />
<br />
<br />
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. [[Genetic engineering]] includes approaches to construct synthetic chromosomes for whole or minimal organisms.<br />
<br />
<br />
<br />
<br />
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches share a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level.<ref>{{cite journal | vauthors = Channon K, Bromley EH, Woolfson DN | title = Synthetic biology through biomolecular design and engineering | journal = Current Opinion in Structural Biology | volume = 18 | issue = 4 | pages = 491–8 | date = August 2008 | pmid = 18644449 | doi = 10.1016/j.sbi.2008.06.006 }}</ref><br />
<br />
<br />
<br />
<br />
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it can be simpler to rebuild the natural systems of interest from the ground up, in order to provide engineered surrogates that are easier to comprehend, control and manipulate.<ref>{{cite journal | first = M | last = Stone | title = Life Redesigned to Suit the Engineering Crowd | journal = Microbe | volume = 1 | issue = 12 | pages = 566–570 | date = 2006 | s2cid = 7171812 | url = https://pdfs.semanticscholar.org/8d45/e0f37a0fb6c1a3c659c71ee9c52619b18364.pdf }}</ref> Re-writers draw inspiration from [[refactoring]], a process sometimes used to improve computer software.<br />
<br />
<br />
<br />
<br />
== Enabling technologies ==
<br />
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include [[standardization]] of biological parts and hierarchical abstraction to permit using those parts in synthetic systems.<ref>{{cite journal | vauthors = Baker D, Church G, Collins J, Endy D, Jacobson J, Keasling J, Modrich P, Smolke C, Weiss R | title = Engineering life: building a fab for biology | journal = Scientific American | volume = 294 | issue = 6 | pages = 44–51 | date = June 2006 | pmid = 16711359 | doi = 10.1038/scientificamerican0606-44 | bibcode = 2006SciAm.294f..44B }}</ref> Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and [[computer-aided design]] (CAD).<br />
<br />
<br />
<br />
<br />
=== DNA and gene synthesis ===
<br />
{{Main|Artificial gene synthesis|Synthetic genomics}}Driven by dramatic decreases in costs of [[oligonucleotides|oligonucleotide]] ("oligos") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level.<ref>{{cite journal | vauthors = Kosuri S, Church GM | title = Large-scale de novo DNA synthesis: technologies and applications | journal = Nature Methods | volume = 11 | issue = 5 | pages = 499–507 | date = May 2014 | pmid = 24781323 | doi = 10.1038/nmeth.2918 | pmc = 7098426 }}</ref> In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) [[Hepatitis C]] virus genome from chemically synthesized 60 to 80-mers.<ref>{{cite journal | vauthors = Blight KJ, Kolykhalov AA, Rice CM | title = Efficient initiation of HCV RNA replication in cell culture | journal = Science | volume = 290 | issue = 5498 | pages = 1972–4 | date = December 2000 | pmid = 11110665 | doi = 10.1126/science.290.5498.1972 | bibcode = 2000Sci...290.1972B }}</ref> In 2002, researchers at [[Stony Brook University]] succeeded in synthesizing the 7741 bp [[poliovirus]] genome from its published sequence, a two-year effort that produced the second synthetic genome.<ref>{{cite journal | vauthors = Couzin J | title = Virology.
Active poliovirus baked from scratch | journal = Science | volume = 297 | issue = 5579 | pages = 174–5 | date = July 2002 | pmid = 12114601 | doi = 10.1126/science.297.5579.174b | s2cid = 83531627 | url = https://semanticscholar.org/paper/248000e7bc654631ae217274a77253ceddf270a1 }}</ref> In 2003 the 5386 bp genome of the [[bacteriophage]] [[Phi X 174]] was assembled in about two weeks.<ref name="assembly2003">{{cite journal | vauthors = Smith HO, Hutchison CA, Pfannkoch C, Venter JC | title = Generating a synthetic genome by whole genome assembly: phiX174 bacteriophage from synthetic oligonucleotides | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 26 | pages = 15440–5 | date = December 2003 | pmid = 14657399 | pmc = 307586 | doi = 10.1073/pnas.2237126100 | bibcode = 2003PNAS..10015440S }}</ref> In 2006, the same team, at the [[J. Craig Venter Institute]], constructed and patented a [[Synthetic genomics|synthetic genome]] of a novel minimal bacterium, ''[[Mycoplasma laboratorium]]'' and were working on getting it functioning in a living cell.<ref>{{cite news|url=https://www.nytimes.com/2007/06/29/science/29cells.html|title=Scientists Transplant Genome of Bacteria|last=Wade|first=Nicholas|date=2007-06-29|work=The New York Times|access-date=2007-12-28|issn=0362-4331}}</ref><ref>{{cite journal | vauthors = Gibson DG, Benders GA, Andrews-Pfannkoch C, Denisova EA, Baden-Tillson H, Zaveri J, Stockwell TB, Brownley A, Thomas DW, Algire MA, Merryman C, Young L, Noskov VN, Glass JI, Venter JC, Hutchison CA, Smith HO | title = Complete chemical synthesis, assembly, and cloning of a Mycoplasma genitalium genome | journal = Science | volume = 319 | issue = 5867 | pages = 1215–20 | date = February 2008 | pmid = 18218864 | doi = 10.1126/science.1151721 | bibcode = 2008Sci...319.1215G | s2cid = 8190996 | url = https://semanticscholar.org/paper/8c662fd0e252c85d056aad7ff16009ebe1dd4cbc }}</ref><ref 
name="Ball">{{cite journal|last1=Ball|first1=Philip|date=2016|title=Man Made: A History of Synthetic Life|url=https://www.sciencehistory.org/distillations/magazine/man-made-a-history-of-synthetic-life|journal=Distillations|volume=2|issue=1|pages=15–23|access-date=22 March 2018}}</ref><br />
<br />
<br />
<br />
<br />
In 2007 it was reported that several companies were offering [[gene synthesis|synthesis of genetic sequences]] up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks.<ref>{{cite news| issn = 0362-4331| last = Pollack| first = Andrew| title = How Do You Like Your Genes? Biofabs Take Orders | work = The New York Times | access-date = 2007-12-28| date = 2007-09-12 | url = https://www.nytimes.com/2007/09/12/technology/techspecial/12gene.html?pagewanted=2&_r=1}}</ref> [[Oligonucleotide]]s harvested from a photolithographic- or inkjet-manufactured [[DNA chip]], combined with PCR and DNA mismatch error-correction, allow inexpensive large-scale changes of [[codons]] in genetic systems to improve [[gene expression]] or incorporate novel amino-acids (see [[George M. Church]]'s and Anthony Forster's synthetic cell projects<ref>{{Cite web|url=http://arep.med.harvard.edu/SBP|title=Synthetic Biology Projects|website=arep.med.harvard.edu|access-date=2018-02-17}}</ref><ref>{{cite journal | vauthors = Forster AC, Church GM | title = Towards synthesis of a minimal cell | journal = Molecular Systems Biology | volume = 2 | issue = 1 | pages = 45 | date = 2006-08-22 | pmid = 16924266 | pmc = 1681520 | doi = 10.1038/msb4100090 }}</ref>). This favors a synthesis-from-scratch approach.<br />
<br />
<br />
<br />
<br />
Additionally, the [[CRISPR|CRISPR/Cas]] system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years".<ref name="washpost_crispr">{{cite news|last1=Basulto|first1=Dominic|title=Everything you need to know about why CRISPR is such a hot technology|url=https://www.washingtonpost.com/news/innovations/wp/2015/11/04/everything-you-need-to-know-about-why-crispr-is-such-a-hot-technology/|access-date=5 December 2015|work=Washington Post|date=November 4, 2015}}</ref> While other methods take months or years to edit gene sequences, CRISPR shortens that time to weeks.<ref name="washpost_crispr" /> Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in [[Do-it-yourself biology|biohacking]].<ref>{{cite news|last1=Kahn|first1=Jennifer|title=The Crispr Quandary|url=https://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html?_r=0|access-date=5 December 2015|work=New York Times|date=November 9, 2015}}</ref><ref>{{cite journal|last1=Ledford|first1=Heidi|title=CRISPR, the disruptor|url=http://www.nature.com/news/crispr-the-disruptor-1.17673|access-date=5 December 2015|agency=Nature News|journal=Nature|date=June 3, 2015|pmid=26040877|doi=10.1038/522020a|volume=522|issue=7554|pages=20–4|bibcode=2015Natur.522...20L|doi-access=free}}</ref><ref>{{cite magazine|last1=Higginbotham|first1=Stacey|title=Top VC Says Gene Editing Is Riskier Than Artificial Intelligence|url=http://fortune.com/2015/12/04/khosla-crispr-ai/|access-date=5 December 2015|magazine=Fortune|date=4 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
=== Sequencing ===
<br />
[[DNA sequencing]] determines the order of [[nucleotide]] bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.<ref>{{cite journal| author = Rollie| date = 2012 |title = Designing biological systems: Systems Engineering meets Synthetic Biology| journal = Chemical Engineering Science| volume = 69 | pages = 1–29| doi=10.1016/j.ces.2011.10.068| issue=1|display-authors=etal}}</ref><br />
<br />
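The verification use of sequencing can be illustrated with a toy base-by-base comparison between a designed sequence and a read of the fabricated construct. This is a deliberately minimal sketch under a strong assumption (the read is already aligned to the design's start coordinate and has no indels); real pipelines perform alignment first.

```python
def verify_construct(intended, read):
    """Return (position, designed_base, observed_base) for each mismatch
    between a designed DNA sequence and an aligned sequencing read."""
    if len(read) != len(intended):
        raise ValueError("read and design lengths differ; align first")
    return [(i, a, b)
            for i, (a, b) in enumerate(zip(intended, read))
            if a != b]
```

An empty result means the fabricated system matches the design at every compared position; any tuples returned flag candidate synthesis or assembly errors to re-check.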
<br />
<br />
<br />
=== Microfluidics ===<br />
<br />
[[Microfluidics]], in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyse and characterize them.<ref>{{cite journal | vauthors = Elani Y | title = Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology | journal = Biochemical Society Transactions | volume = 44 | issue = 3 | pages = 723–30 | date = June 2016 | pmid = 27284034 | pmc = 4900754 | doi = 10.1042/BST20160052 }}</ref><ref>{{cite journal | vauthors = Gach PC, Iwai K, Kim PW, Hillson NJ, Singh AK | title = Droplet microfluidics for synthetic biology | journal = Lab on a Chip | volume = 17 | issue = 20 | pages = 3388–3400 | date = October 2017 | pmid = 28820204 | doi = 10.1039/C7LC00576H | osti = 1421856 | url = http://www.escholarship.org/uc/item/6cr3k0v5 }}</ref> It is widely employed in screening assays.<ref>{{cite journal | vauthors = Vinuselvi P, Park S, Kim M, Park JM, Kim T, Lee SK | title = Microfluidic technologies for synthetic biology | journal = International Journal of Molecular Sciences | volume = 12 | issue = 6 | pages = 3576–93 | date = 2011-06-03 | pmid = 21747695 | pmc = 3131579 | doi = 10.3390/ijms12063576 }}</ref><br />
<br />
<br />
<br />
<br />
=== Modularity ===
<br />
The most used<ref name="primer">{{Cite book|title=Synthetic Biology – A Primer|last1=Freemont|first1=Paul S.|last2=Kitney|first2=Richard I.| name-list-style = vanc |date=2012|publisher=World Scientific|isbn=978-1-84816-863-3|doi=10.1142/p837}}</ref>{{rp|22–23}} standardized DNA parts are [[BioBrick]] plasmids, invented by [[Tom Knight (scientist)|Tom Knight]] in 2003.<ref>{{Cite journal|last1=Knight|first1=Thomas| name-list-style = vanc |year=2003|title=Tom Knight (2003). Idempotent Vector Design for Standard Assembly of Biobricks|hdl=1721.1/21168}}</ref> Biobricks are stored at the [[Registry of Standard Biological Parts]] in Cambridge, Massachusetts. The BioBrick standard has been used by thousands of students worldwide in the [[international Genetically Engineered Machine]] (iGEM) competition.<ref name="primer" />{{rp|22–23}}<br />
<br />
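The property that makes BioBricks composable is idempotency: joining two standard parts yields a composite flanked by the same prefix and suffix, so it can itself be used as a part. The sketch below models this at the sequence level; the prefix, suffix, and 8 bp scar are the sequences commonly quoted for the BioBrick RFC 10 standard and should be treated as illustrative.

```python
# Commonly quoted BioBrick RFC 10 flanking sequences (illustrative):
BB_PREFIX = "GAATTCGCGGCCGCTTCTAGAG"   # EcoRI / NotI / XbaI sites
BB_SUFFIX = "TACTAGTAGCGGCCGCTGCAG"    # SpeI / NotI / PstI sites
SCAR = "TACTAGAG"                      # mixed XbaI/SpeI site; cut by neither enzyme

def biobrick(part_seq):
    """Wrap a part's internal sequence in the standard prefix/suffix."""
    return BB_PREFIX + part_seq + BB_SUFFIX

def standard_assembly(upstream, downstream):
    # Idempotent standard assembly: ligating the SpeI end of the upstream
    # part to the XbaI end of the downstream part fuses the two internal
    # sequences across a scar, while regenerating valid prefix/suffix
    # flanks, so the output is again a usable BioBrick.
    return BB_PREFIX + upstream + SCAR + downstream + BB_SUFFIX
```

Because `standard_assembly` returns something with the same flanks as `biobrick`, composites can be chained indefinitely, which is what lets iGEM teams build devices from registry parts.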
<br />
<br />
<br />
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilient to harsh conditions). Interactions such as [[coiled coil]]s,<ref>{{cite journal | vauthors = Woolfson DN, Bartlett GJ, Bruning M, Thomson AR | title = New currency for old rope: from coiled-coil assemblies to α-helical barrels | journal = Current Opinion in Structural Biology | volume = 22 | issue = 4 | pages = 432–41 | date = August 2012 | pmid = 22445228 | doi = 10.1016/j.sbi.2012.03.002 }}</ref> [[SH3 domain]]-peptide binding<ref>{{cite journal | vauthors = Dueber JE, Wu GC, Malmirchegini GR, Moon TS, Petzold CJ, Ullal AV, Prather KL, Keasling JD | title = Synthetic protein scaffolds provide modular control over metabolic flux | journal = Nature Biotechnology | volume = 27 | issue = 8 | pages = 753–9 | date = August 2009 | pmid = 19648908 | doi = 10.1038/nbt.1557 | s2cid = 2756476 }}</ref> or [[SpyCatcher|SpyTag/SpyCatcher]]<ref>{{cite journal | vauthors = Reddington SC, Howarth M | title = Secrets of a covalent interaction for biomaterials and biotechnology: SpyTag and SpyCatcher | journal = Current Opinion in Chemical Biology | volume = 29 | pages = 94–9 | date = December 2015 | pmid = 26517567 | doi = 10.1016/j.cbpa.2015.10.002 | doi-access = free }}</ref> offer such control. 
In addition, it is often necessary to regulate protein-protein interactions in cells, for example with light (using [[light-oxygen-voltage-sensing domain]]s) or with cell-permeable small molecules by [[chemically induced dimerization]].<ref>{{cite journal | vauthors = Bayle JH, Grimley JS, Stankunas K, Gestwicki JE, Wandless TJ, Crabtree GR | title = Rapamycin analogs with differential binding specificity permit orthogonal control of protein activity | journal = Chemistry & Biology | volume = 13 | issue = 1 | pages = 99–107 | date = January 2006 | pmid = 16426976 | doi = 10.1016/j.chembiol.2005.10.017 | doi-access = free }}</ref><br />
<br />
<br />
<br />
<br />
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the modeling module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.<ref name="altszylerUltrasens2014">{{cite journal | vauthors = Altszyler E, Ventura A, Colman-Lerner A, Chernomoretz A | title = Impact of upstream and downstream constraints on a signaling module's ultrasensitivity | journal = Physical Biology | volume = 11 | issue = 6 | pages = 066003 | date = October 2014 | pmid = 25313165 | pmc = 4233326 | doi = 10.1088/1478-3975/11/6/066003 | bibcode = 2014PhBio..11f6003A }}</ref><ref name="altszylerUltrasens2017">{{cite journal | vauthors = Altszyler E, Ventura AC, Colman-Lerner A, Chernomoretz A | title = Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations | journal = PLOS ONE | volume = 12 | issue = 6 | pages = e0180083 | year = 2017 | pmid = 28662096 | pmc = 5491127 | doi = 10.1371/journal.pone.0180083 | bibcode = 2017PLoSO..1280083A | arxiv = 1608.08007 }}</ref><br />
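The context-dependence of a module's sensitivity can be made quantitative with the response coefficient (the local log-log gain d ln f / d ln x; values above 1 indicate ultrasensitivity). A hypothetical sketch: a Hill module with coefficient n = 4 has gain near 4 at low input when measured in isolation, but placing a saturating upstream component in front of it lowers the gain the composed system actually delivers.

```python
import math

def hill(x, K=1.0, n=4.0):
    """Ultrasensitive module: Hill activation curve."""
    return x ** n / (K ** n + x ** n)

def log_gain(f, x, eps=1e-6):
    # Response coefficient d ln f / d ln x, estimated numerically.
    return (math.log(f(x * (1.0 + eps))) - math.log(f(x))) / math.log(1.0 + eps)

def upstream(x):
    """Saturating (Michaelis-Menten-like) upstream component."""
    return x / (1.0 + x)

def composed(x):
    """The same Hill module embedded downstream of a saturating step."""
    return hill(upstream(x))
```

Comparing `log_gain(hill, x)` with `log_gain(composed, x)` at the same input shows the embedded module contributing less sensitivity than it sustains in isolation, the effect discussed in the cited work.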
<br />
<br />
<br />
<br />
模型通过在构建之前更好地预测系统行为来指导工程生物系统的设计。合成生物学受益于更好的模型:生物分子如何结合底物并催化反应,DNA 如何编码确定细胞所需的信息,以及多组分集成系统如何运作。基因调控网络的多尺度模型侧重于合成生物学应用。模拟可以对基因调控网络的转录、翻译、调控和诱导过程中的所有生物分子相互作用进行建模。<br />
<br />
=== Modeling 建模 ===<br />
<br />
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in [[Transcription (biology)|transcription]], [[Translation (biology)|translation]], regulation and induction of gene regulatory networks.<ref>{{cite journal | vauthors = Carbonell-Ballestero M, Duran-Nebreda S, Montañez R, Solé R, Macía J, Rodríguez-Caso C | title = A bottom-up characterization of transfer functions for synthetic biology designs: lessons from enzymology | journal = Nucleic Acids Research | volume = 42 | issue = 22 | pages = 14060–14069 | date = December 2014 | pmid = 25404136 | pmc = 4267673 | doi = 10.1093/nar/gku964 }}</ref><br />
<br />
<ref>{{cite journal | vauthors = Kaznessis YN | title = Models for synthetic biology | journal = BMC Systems Biology | volume = 1 | issue = 1 | pages = 47 | date = November 2007 | pmid = 17986347 | pmc = 2194732 | doi = 10.1186/1752-0509-1-47 }}</ref><br />
<br />
<ref>{{cite conference |vauthors=Tuza ZA, Singhal V, Kim J, Murray RM | title = An in silico modeling toolbox for rapid prototyping of circuits in a biomolecular "breadboard" system. |book-title=52nd IEEE Conference on Decision and Control |date=December 2013 |doi=10.1109/CDC.2013.6760079}}</ref><br />
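As a minimal illustration of the kind of model referred to above, the sketch below integrates a deterministic two-variable gene-expression model (inducible transcription, translation, first-order decay) with forward Euler. The rate constants are assumptions chosen for readability, not measured values:

```python
# Minimal deterministic model of one gene: inducible transcription plus
# translation, each with first-order decay, integrated by forward Euler.
# All parameter values are illustrative assumptions, not measured rates.
def simulate(induction, t_end=200.0, dt=0.01):
    k_tx, k_tl = 2.0, 5.0        # transcription and translation rates
    d_m, d_p = 0.2, 0.05         # mRNA and protein decay rates
    m = p = 0.0                  # mRNA and protein levels
    t = 0.0
    while t < t_end:
        dm = k_tx * induction - d_m * m
        dp = k_tl * m - d_p * p
        m += dm * dt
        p += dp * dt
        t += dt
    return m, p

m, p = simulate(induction=1.0)
# At steady state m approaches k_tx/d_m = 10 and p approaches k_tl*m/d_p = 1000.
print(round(m, 1), round(p, 1))
```

Real modeling toolchains replace this hand-rolled integrator with stiff ODE solvers or stochastic simulation, but the structure (production terms minus decay terms per species) is the same one scaled up in whole-network models.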
<br />
<br />
<br />
<br />
研究已经考察了 DNA 转录机制的组成部分。创造合成生物电路的科学家的一个愿望,是能够控制合成 DNA 在单细胞生物(原核生物)和多细胞生物(真核生物)中的转录。一项研究测试了合成转录因子(sTF)在转录输出以及多个转录因子复合物之间协同能力方面的可调节性。研究人员对锌指(sTF 中识别特定 DNA 的组件)的功能区进行突变,以降低其对特定操纵基因 DNA 序列位点的亲和力,从而降低 sTF 相应的位点特异性活性(通常是转录调控)。他们进一步将锌指用作形成复合物的 sTF 的组件,这属于真核翻译机制。研究人员还在活细胞中演示了模拟计算和数字计算,证明细菌可以被改造为执行模拟和/或数字计算。2007年,研究人员展示了一种可在哺乳动物细胞中运行的通用逻辑求值器。随后在2011年,研究人员利用这一范式演示了一种概念验证疗法,利用生物数字计算来检测并杀死人类癌细胞。2016年,另一组研究人员证明,计算机工程的原理可以用于自动化细菌细胞中的数字电路设计。2017年,研究人员演示了“通过 DNA 删除实现布尔逻辑和算术”(BLADE)系统,用于在人类细胞中构建数字计算。<br />
<br />
=== Synthetic transcription factors 合成转录因子 ===<br />
<br />
Studies have considered the components of the [[Transcription (biology)|DNA transcription]] mechanism. One desire of scientists creating [[synthetic biological circuit]]s is to be able to control the transcription of synthetic DNA in unicellular organisms ([[prokaryote]]s) and in multicellular organisms ([[eukaryote]]s). One study tested the adjustability of synthetic [[transcription factor]]s (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes.<ref name="Khalil AS 2012">{{cite journal | vauthors = Khalil AS, Lu TK, Bashor CJ, Ramirez CL, Pyenson NC, Joung JK, Collins JJ | title = A synthetic biology framework for programming eukaryotic transcription functions | journal = Cell | volume = 150 | issue = 3 | pages = 647–58 | date = August 2012 | pmid = 22863014 | pmc = 3653585 | doi = 10.1016/j.cell.2012.05.045 }}</ref> Researchers were able to mutate functional regions called [[zinc finger]]s, the DNA specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, which are the [[eukaryotic translation]] mechanisms.<ref name="Khalil AS 2012"/><br />
<br />
<br />
<br />
<br />
生物传感器指经过工程改造的生物体(通常是细菌),它能够报告某种环境现象,例如重金属或毒素的存在。其中一个这样的系统是费氏弧菌(Aliivibrio fischeri)的 Lux 操纵子,它编码细菌生物发光的来源酶,可以放在应答启动子之后,使发光基因响应特定环境刺激而表达。已制成的一种此类传感器,由覆盖在光敏计算机芯片上的发光细菌涂层构成,用于检测某些石油污染物;当细菌感知到污染物时就会发光。另一个类似机制的例子是用工程改造的大肠杆菌报告菌株探测地雷:该菌株能够检测 TNT 及其主要降解产物 DNT,并随之产生绿色荧光蛋白(GFP)。<br />
<br />
== Applications 应用 ==<br />
<br />
=== Biological computers 生物计算机 ===<br />
<br />
<br />
经过改造的生物体可以感知环境信号,并发送可被检测到的输出信号,用于诊断目的。微生物群落已被用于这种用途。<br />
<br />
A [[biological computer]] refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of [[logic gate]]s in a number of organisms,<ref>{{cite journal | vauthors = Singh V | title = Recent advances and opportunities in synthetic logic gates engineering in living cells | journal = Systems and Synthetic Biology | volume = 8 | issue = 4 | pages = 271–82 | date = December 2014 | pmid = 26396651 | pmc = 4571725 | doi = 10.1007/s11693-014-9154-6 }}</ref> and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform both analog and/or digital computation.<ref>{{cite journal | vauthors = Purcell O, Lu TK | title = Synthetic analog and digital circuits for cellular computation and memory | journal = Current Opinion in Biotechnology | volume = 29 | pages = 146–55 | date = October 2014 | pmid = 24794536 | pmc = 4237220 | doi = 10.1016/j.copbio.2014.04.009 | series = Cell and Pathway Engineering }}</ref><ref>{{cite journal | vauthors = Daniel R, Rubens JR, Sarpeshkar R, Lu TK | title = Synthetic analog computation in living cells | journal = Nature | volume = 497 | issue = 7451 | pages = 619–23 | date = May 2013 | pmid = 23676681 | doi = 10.1038/nature12148 | bibcode = 2013Natur.497..619D | s2cid = 4358570 }}</ref> In human cells research demonstrated a universal logic evaluator that operates in mammalian cells in 2007.<ref>{{cite journal | vauthors = Rinaudo K, Bleris L, Maddamsetti R, Subramanian S, Weiss R, Benenson Y | title = A universal RNAi-based logic evaluator that operates in mammalian cells | journal = Nature Biotechnology | volume = 25 | issue = 7 | pages = 795–801 | date = July 2007 | pmid = 17515909 | doi = 10.1038/nbt1307 | s2cid = 280451 }}</ref> Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital 
computation to detect and kill human cancer cells in 2011.<ref>{{cite journal | vauthors = Xie Z, Wroblewska L, Prochazka L, Weiss R, Benenson Y | title = Multi-input RNAi-based logic circuit for identification of specific cancer cells | journal = Science | volume = 333 | issue = 6047 | pages = 1307–11 | date = September 2011 | pmid = 21885784 | doi = 10.1126/science.1205527 | bibcode = 2011Sci...333.1307X | s2cid = 13743291 | url = https://semanticscholar.org/paper/372e175668b5323d79950b58f12b36f6974a81ef }}</ref> Another group of researchers demonstrated in 2016 that principles of [[computer engineering]], can be used to automate digital circuit design in bacterial cells.<ref>{{cite journal | vauthors = Nielsen AA, Der BS, Shin J, Vaidyanathan P, Paralanov V, Strychalski EA, Ross D, Densmore D, Voigt CA | title = Genetic circuit design automation | journal = Science | volume = 352 | issue = 6281 | pages = aac7341 | date = April 2016 | pmid = 27034378 | doi = 10.1126/science.aac7341 | doi-access = free }}</ref> In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells.<ref>{{cite journal | vauthors = Weinberg BH, Pham NT, Caraballo LD, Lozanoski T, Engel A, Bhatia S, Wong WW | title = Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells | journal = Nature Biotechnology | volume = 35 | issue = 5 | pages = 453–462 | date = May 2017 | pmid = 28346402 | pmc = 5423837 | doi = 10.1038/nbt.3805 }}</ref><br />
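A common digital abstraction in this literature treats a multi-input promoter as a Boolean gate. The sketch below is a hypothetical model rather than any of the published circuits: it multiplies two Hill activation terms (one per inducer) and thresholds the reporter level to recover AND logic:

```python
# Toy two-input genetic AND gate: promoter activity is the product of two
# Hill activation terms, one per inducer. Thresholding the output recovers
# the Boolean abstraction used by genetic circuit design tools.
# All parameter values are illustrative assumptions.
def promoter_activity(a, b, K=1.0, n=2.0):
    act = lambda x: x**n / (K**n + x**n)
    return act(a) * act(b)

def gate_and(a, b, threshold=0.5):
    return promoter_activity(a, b) > threshold

# Only the (high, high) input combination crosses the threshold.
for a, b in [(0, 0), (0, 10), (10, 0), (10, 10)]:
    print(a, b, gate_and(a, b))
```

Automated design tools in the spirit of the cited work compose many such characterized gate models, then check that the analog levels between stages stay within the digital noise margins.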
<br />
<br />
<br />
=== Biosensors 生物传感器 ===<br />
<br />
<br />
细胞使用相互作用的基因和蛋白质(即所谓的基因回路)来实现多种功能,如响应环境信号、决策和通信。其中涉及三个关键组成部分:DNA、RNA,以及合成生物学家设计的、能够在转录、转录后和翻译等多个水平上控制基因表达的基因回路。<br />
<br />
A [[biosensor]] refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the [[Luciferase|Lux operon]] of ''[[Aliivibrio fischeri]],''<ref>{{cite journal | vauthors = de Almeida PE, van Rappard JR, Wu JC | title = In vivo bioluminescence for tracking cell fate and function | journal = American Journal of Physiology. Heart and Circulatory Physiology | volume = 301 | issue = 3 | pages = H663–71 | date = September 2011 | pmid = 21666118 | pmc = 3191083 | doi = 10.1152/ajpheart.00337.2011 }}</ref> which codes for the enzyme that is the source of bacterial [[bioluminescence]], and can be placed after a respondent [[Promoter (genetics)|promoter]] to express the luminescence genes in response to a specific environmental stimulus.<ref>{{cite journal | vauthors = Close DM, Xu T, Sayler GS, Ripp S | title = In vivo bioluminescent imaging (BLI): noninvasive visualization and interrogation of biological processes in living animals | journal = Sensors | volume = 11 | issue = 1 | pages = 180–206 | date = 2011 | pmid = 22346573 | pmc = 3274065 | doi = 10.3390/s110100180 }}</ref> One such sensor created, consisted of a [[bioluminescent bacteria]]l coating on a photosensitive [[computer chip]] to detect certain [[petroleum]] [[pollutant]]s. When the bacteria sense the pollutant, they luminesce.<ref>{{cite journal|last=Gibbs|first=W. 
Wayt| name-list-style = vanc |date=1997 |title=Critters on a Chip |url=http://www.sciam.com/article.cfm?id=critters-on-a-chip |journal=Scientific American|access-date=2 Mar 2009}}</ref> Another example of a similar mechanism is the detection of landmines by an engineered ''E.coli'' reporter strain capable of detecting [[TNT]] and its main degradation product [[2,4-Dinitrotoluene|DNT]], and consequently producing a green fluorescent protein ([[Green fluorescent protein|GFP]]).<ref>{{Cite journal|last1=Belkin|first1=Shimshon|last2=Yagur-Kroll|first2=Sharon|last3=Kabessa|first3=Yossef|last4=Korouma|first4=Victor|last5=Septon|first5=Tali|last6=Anati|first6=Yonatan|last7=Zohar-Perez|first7=Cheinat|last8=Rabinovitz|first8=Zahi|last9=Nussinovitch|first9=Amos|date=April 2017|title=Remote detection of buried landmines using a bacterial sensor|journal=Nature Biotechnology|volume=35|issue=4|pages=308–310|doi=10.1038/nbt.3791|pmid=28398330|s2cid=3645230|issn=1087-0156}}</ref><br />
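The biosensor readout described above can be caricatured as a leaky reporter plus a fold-change detection rule. The following toy model is an assumption-laden sketch, not the published Lux or GFP constructs, and all numbers are invented for illustration:

```python
# Toy biosensor readout: reporter expression is a leaky basal level plus
# analyte-dependent induction (Michaelis-Menten-like saturation). A sample
# is called positive when the signal exceeds a fold-change over the
# uninduced control. All parameter values are illustrative assumptions.
def reporter(analyte, basal=5.0, vmax=100.0, K=2.0):
    return basal + vmax * analyte / (K + analyte)

def detected(analyte, fold=3.0):
    return reporter(analyte) >= fold * reporter(0.0)

print(detected(0.1), detected(5.0))
```

The fold-change rule matters in practice because engineered promoters are rarely perfectly off: basal leakage sets the noise floor against which the luminescent or fluorescent signal must be judged.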
<br />
<br />
<br />
<br />
传统的代谢工程已经通过引入外源基因组合以及定向进化优化得到加强。这包括改造大肠杆菌和酵母,用于商业化生产抗疟药物青蒿素的前体。<br />
<br />
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.<ref name="pmid26019220">{{cite journal | vauthors = Danino T, Prindle A, Kwong GA, Skalak M, Li H, Allen K, Hasty J, Bhatia SN | title = Programmable probiotics for detection of cancer in urine | journal = Science Translational Medicine | volume = 7 | issue = 289 | pages = 289ra84 | date = May 2015 | pmid = 26019220 | pmc = 4511399 | doi = 10.1126/scitranslmed.aaa3519 }}</ref><br />
<br />
<br />
<br />
<br />
虽然活细胞可以用新的 DNA 进行转化,但整个生物体尚无法从头创造。有几种方法可以构建合成 DNA 组件,甚至整个合成基因组;一旦获得所需的遗传密码,就将其整合到一个活细胞中,期望该细胞在生长繁衍的同时表现出所需的新能力或表型。细胞转化被用来构建生物电路,通过操纵这些电路可以产生所需的输出。<br />
<br />
=== Cell transformation 细胞转化 ===<br />
<br />
{{Main|Transformation (genetics)}}Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and synthetic-biologist-designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.<br />
<br />
<br />
<br />
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering ''E. coli'' and [[yeast]] for commercial production of a precursor of the [[Antimalarial medication|antimalarial drug]], [[Artemisinin]].<ref>{{cite journal | vauthors = Westfall PJ, Pitera DJ, Lenihan JR, Eng D, Woolard FX, Regentin R, Horning T, Tsuruta H, Melis DJ, Owens A, Fickes S, Diola D, Benjamin KR, Keasling JD, Leavell MD, McPhee DJ, Renninger NS, Newman JD, Paddon CJ | title = Production of amorphadiene in yeast, and its conversion to dihydroartemisinic acid, precursor to the antimalarial agent artemisinin | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 3 | pages = E111–8 | date = January 2012 | pmid = 22247290 | pmc = 3271868 | doi = 10.1073/pnas.1110740109 | bibcode = 2012PNAS..109E.111W }}</ref><br />
<br />
<br />
Top7 蛋白是最早被设计出的、具有自然界中从未见过的全新折叠结构的蛋白质之一。<br />
<br />
<br />
<br />
Entire organisms have yet to be created from scratch, although living cells can be [[Transformation (genetics)|transformed]] with new DNA. Several ways allow constructing synthetic DNA components and even entire [[Artificial gene synthesis|synthetic genomes]], but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or [[phenotype]]s while growing and thriving.<ref>{{cite news|url=https://www.independent.co.uk/news/science/eureka-scientists-unveil-giant-leap-towards-synthetic-life-9219644.html|title=Eureka! Scientists unveil giant leap towards synthetic life|last=Connor|first=Steve|date=28 March 2014|work=The Independent|access-date=2015-08-06}}</ref> Cell transformation is used to create [[Synthetic biological circuit|biological circuits]], which can be manipulated to yield desired outputs.<ref name=":0" /><ref name=":1" /><br />
<br />
<br />
天然蛋白质可以被改造;例如,通过定向进化,可以产生与现有蛋白质功能相当或更优的新蛋白质结构。一个研究组构建了一种螺旋束,它能以与血红蛋白相似的性质结合氧,但不结合一氧化碳。类似的蛋白质结构被构建出来以支持多种氧化还原酶活性,另一个则构成了在结构和序列上都全新的 ATP 酶。另一组研究人员构建了一类 G 蛋白偶联受体,它们可以被惰性小分子氯氮平-N-氧化物激活,但对天然配体乙酰胆碱不敏感;这些受体被称为 DREADD。新的功能或蛋白质特异性也可以用计算方法来设计。一项研究使用了两种不同的计算方法:用生物信息学和分子建模方法挖掘序列数据库,用计算酶设计方法重编程酶的特异性。两种方法都得到了对以糖为原料生产较长链醇具有100倍以上特异性的设计酶。<br />
<br />
<br />
<br />
By integrating synthetic biology with [[materials science]], it would be possible to use cells as microscopic molecular foundries to produce materials with properties whose properties were genetically encoded. Re-engineering has produced Curli fibers, the [[amyloid]] component of extracellular material of [[biofilms]], as a platform for programmable [[nanomaterial]]. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.<ref>{{cite journal|vauthors=Nguyen PQ, Botyanszki Z, Tay PK, Joshi NS|date=September 2014|title=Programmable biofilm-based materials from engineered curli nanofibres|journal=Nature Communications|volume=5|pages=4945|bibcode=2014NatCo...5.4945N|doi=10.1038/ncomms5945|pmid=25229329|doi-access=free}}</ref><br />
<br />
<br />
另一个常见的研究方向是扩展天然的20种氨基酸。不计终止密码子,共有61个密码子,但所有生物体一般只编码20种氨基酸。某些密码子被改造为编码替代氨基酸,包括非标准氨基酸(如 O-甲基酪氨酸)或外源氨基酸(如4-氟苯丙氨酸)。通常,这些项目利用来自其他生物体的、经重新编码的无义抑制 tRNA-氨酰 tRNA 合成酶对,不过在大多数情况下需要大量的工程改造。<br />
<br />
<br />
<br />
=== Designed proteins 设计蛋白质 ===<br />
<br />
<br />
其他研究人员通过削减正常的20种氨基酸来研究蛋白质的结构和功能。有限的蛋白质序列库是这样构建的:在生成蛋白质时,将成组的氨基酸替换为单一氨基酸。例如,一个蛋白质中的若干非极性氨基酸可以全部替换为同一种非极性氨基酸。一个项目证明,当只使用9种氨基酸时,一种改造过的分支酸变位酶仍具有催化活性。<br />
<br />
<br />
<br />
[[File:Top7.png|thumb|The [[Top7]] protein was one of the first proteins designed for a fold that had never been seen before in nature<ref name="kuhlman03">{{cite journal | vauthors = Kuhlman B, Dantas G, Ireton GC, Varani G, Stoddard BL, Baker D | title = Design of a novel globular protein fold with atomic-level accuracy | journal = Science | volume = 302 | issue = 5649 | pages = 1364–8 | date = November 2003 | pmid = 14631033 | doi = 10.1126/science.1089427 | bibcode = 2003Sci...302.1364K | s2cid = 1939390 | url = https://semanticscholar.org/paper/3188f905b60172dcad17a9b8c23567400c2bb65f }}</ref> ]]<br />
<br />
<br />
研究人员和公司运用合成生物学来合成具有高活性、最佳产量和有效性的工业酶。这些合成酶旨在改进洗涤剂、无乳糖乳制品等产品,并使它们更具成本效益。合成生物学对代谢工程的改进,是生物技术手段在工业中用于发现药物和发酵化学品的一个典型例子。合成生物学可以研究生化生产中的模块化途径系统,并提高代谢产物的产量。人工酶活性及其对代谢反应速率和产量的后续影响,可能发展出“改善细胞特性……用于工业上重要的生化生产的高效新策略”。<br />
<br />
<br />
<br />
Natural proteins can be engineered, for example, by [[directed evolution]], novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a [[helix bundle]] that was capable of binding [[oxygen]] with similar properties as [[hemoglobin]], yet did not bind [[carbon monoxide]].<ref>{{cite journal | vauthors = Koder RL, Anderson JL, Solomon LA, Reddy KS, Moser CC, Dutton PL | title = Design and engineering of an O(2) transport protein | journal = Nature | volume = 458 | issue = 7236 | pages = 305–9 | date = March 2009 | pmid = 19295603 | pmc = 3539743 | doi = 10.1038/nature07841 | bibcode = 2009Natur.458..305K }}</ref> A similar protein structure was generated to support a variety of [[oxidoreductase]] activities <ref>{{cite journal | vauthors = Farid TA, Kodali G, Solomon LA, Lichtenstein BR, Sheehan MM, Fry BA, Bialas C, Ennist NM, Siedlecki JA, Zhao Z, Stetz MA, Valentine KG, Anderson JL, Wand AJ, Discher BM, Moser CC, Dutton PL | title = Elementary tetrahelical protein design for diverse oxidoreductase functions | journal = Nature Chemical Biology | volume = 9 | issue = 12 | pages = 826–833 | date = December 2013 | pmid = 24121554 | pmc = 4034760 | doi = 10.1038/nchembio.1362 }}</ref> while another formed a structurally and sequentially novel [[ATPase]].<ref name="WangHecht2020">{{cite journal|last1=Wang|first1=MS|last2=Hecht|first2=MH|title=A Completely De Novo ATPase from Combinatorial Protein Design|journal=Journal of the American Chemical Society|year=2020|volume=142|issue=36|pages=15230–15234|issn=0002-7863|doi=10.1021/jacs.0c02954|pmid=32833456}}</ref> Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule [[clozapine N-oxide]] but insensitive to the native [[ligand]], [[acetylcholine]]; these receptors are known as [[Receptor activated solely by a synthetic ligand|DREADDs]].<ref>{{cite journal | vauthors = Armbruster BN, Li X, Pausch 
MH, Herlitze S, Roth BL | title = Evolving the lock to fit the key to create a family of G protein-coupled receptors potently activated by an inert ligand | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 104 | issue = 12 | pages = 5163–8 | date = March 2007 | pmid = 17360345 | pmc = 1829280 | doi = 10.1073/pnas.0700293104 | bibcode = 2007PNAS..104.5163A }}</ref> Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods – a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100 fold specificity for production of longer chain alcohols from sugar.<ref>{{cite journal | vauthors = Mak WS, Tran S, Marcheschi R, Bertolani S, Thompson J, Baker D, Liao JC, Siegel JB | title = Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway | journal = Nature Communications | volume = 6 | pages = 10005 | date = November 2015 | pmid = 26598135 | pmc = 4673503 | doi = 10.1038/ncomms10005 | bibcode = 2015NatCo...610005M }}</ref><br />
<br />
<br />
<br />
<br />
科学家可以将数字信息编码到一条合成 DNA 链上。2012年,George M. Church 将他一本关于合成生物学的书编码进了 DNA。这5.3 Mb 的数据量比此前存储在合成 DNA 中的最大信息量大1000多倍。一个类似的项目将威廉·莎士比亚的全部十四行诗编码进了 DNA。更一般地说,NUPACK、ViennaRNA、Ribosome Binding Site Calculator、Cello 和 Non-Repetitive Parts Calculator 等算法使新遗传系统的设计成为可能。<br />
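A minimal version of DNA data storage can be sketched with a two-bits-per-base code. Note that this mapping is illustrative only; Church's actual scheme used a one-bit-per-base encoding with addressing blocks and error handling:

```python
# Illustrative two-bits-per-base code for storing digital data in DNA.
# This is a simplified sketch, not the encoding used in the 2012 Church
# experiment (which used one bit per base plus addressing/error handling).
TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
FROM_BASE = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map each byte to four bases, most significant bit pair first."""
    return "".join(TO_BASE[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def decode(dna: str) -> bytes:
    """Invert encode(): every four bases reconstruct one byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | FROM_BASE[base]
        out.append(byte)
    return bytes(out)

msg = b"in silico"
assert decode(encode(msg)) == msg   # round trip is lossless
print(encode(b"Hi"))                # prints "CAGACGGC"
```

Practical schemes additionally avoid long homopolymer runs and add redundancy, which is part of what tools like the Non-Repetitive Parts Calculator address at the sequence-design level.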
<br />
Another common investigation is [[Expanded genetic code|expansion]] of the natural set of 20 [[amino acid]]s. Excluding [[stop codon]]s, 61 [[codons]] have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl [[tyrosine]]; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded [[nonsense suppressor]] [[Transfer RNA|tRNA]]-[[Aminoacyl tRNA synthetase]] pairs from other organisms, though in most cases substantial engineering is required.<ref>{{cite journal | vauthors = Wang Q, Parrish AR, Wang L | title = Expanding the genetic code for biological studies | journal = Chemistry & Biology | volume = 16 | issue = 3 | pages = 323–36 | date = March 2009 | pmid = 19318213 | pmc = 2696486 | doi = 10.1016/j.chembiol.2009.03.001 }}</ref><br />
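The codon arithmetic quoted above (61 sense codons out of 64 total, once the three stop codons are excluded) can be checked directly:

```python
from itertools import product

# Enumerate all triplets over the four RNA bases and subtract the stops.
codons = {"".join(c) for c in product("ACGU", repeat=3)}
stops = {"UAA", "UAG", "UGA"}

print(len(codons), len(codons - stops))  # prints "64 61"
```

The gap between 61 sense codons and 20 standard amino acids is the degeneracy that expanded-genetic-code projects exploit when they reassign a codon to a nonstandard amino acid.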
<br />
<br />
<br />
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate mRNA or proteins able to use the artificial nucleotides.<br />
<br />
许多在体外和体内将非天然核苷酸和氨基酸掺入核酸和蛋白质的技术已被开发出来。例如,2014年5月,研究人员宣布他们已成功将两种新的人工核苷酸引入细菌 DNA。通过在培养基中加入单独的人工核苷酸,他们将细菌传代了24次;这些细菌没有产生能够利用人工核苷酸的 mRNA 或蛋白质。<br />
<br />
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid.<ref>{{cite journal|author=Davidson, AR|author2=Lumb, KJ|author3=Sauer, RT|date=1995|title=Cooperatively folded proteins in random sequence libraries|journal=Nature Structural Biology|volume=2|issue=10|pages=856–864|doi=10.1038/nsb1095-856|pmid=7552709|s2cid=31781262}}</ref> For instance, several [[Chemical polarity|non-polar]] amino acids within a protein can all be replaced with a single non-polar amino acid.<ref>{{cite journal|vauthors=Kamtekar S, Schiffer JM, Xiong H, Babik JM, Hecht MH|date=December 1993|title=Protein design by binary patterning of polar and nonpolar amino acids|journal=Science|volume=262|issue=5140|pages=1680–5|bibcode=1993Sci...262.1680K|doi=10.1126/science.8259512|pmid=8259512}}</ref> One project demonstrated that an engineered version of [[Chorismate mutase]] still had catalytic activity when only 9 amino acids were used.<ref>{{cite journal|vauthors=Walter KU, Vamvaca K, Hilvert D|date=November 2005|title=An active enzyme constructed from a 9-amino acid alphabet|journal=The Journal of Biological Chemistry|volume=280|issue=45|pages=37742–6|doi=10.1074/jbc.M507210200|pmid=16144843|doi-access=free}}</ref><br />
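Reduced-alphabet design can be sketched as a mapping over a sequence. The binary polar/non-polar patterning below follows the spirit of the cited Kamtekar et al. work, but the residue classes and replacement letters are an illustrative simplification, not the specific 9-letter alphabet used in the chorismate mutase study:

```python
# Collapse a protein sequence onto a reduced alphabet: every non-polar
# residue becomes leucine (L), every other residue becomes serine (S).
# The residue classification here is a simplifying assumption.
NONPOLAR = set("AVLIMFWPG")

def reduce_alphabet(seq: str) -> str:
    return "".join("L" if aa in NONPOLAR else "S" for aa in seq)

print(reduce_alphabet("MKTAYIAKQR"))  # prints "LSSLSLLSSS"
```

The resulting two-letter pattern preserves the hydrophobic periodicity that drives folding, which is why such drastically reduced libraries can still yield cooperatively folded, and occasionally catalytically active, proteins.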
<br />
<br />
<br />
Researchers and companies practice synthetic biology to synthesize [[industrial enzymes]] with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective.<ref>{{cite web|url=https://www.thermofisher.com/us/en/home/life-science/synthetic-biology/synthetic-biology-applications.html|title=Synthetic Biology Applications|website=www.thermofisher.com|access-date=2015-11-12}}</ref> The improvements of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentive chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".<ref>{{cite journal | vauthors = Liu Y, Shin HD, Li J, Liu L | title = Toward metabolic engineering in the context of system biology and synthetic biology: advances and prospects | journal = Applied Microbiology and Biotechnology | volume = 99 | issue = 3 | pages = 1109–18 | date = February 2015 | pmid = 25547833 | doi = 10.1007/s00253-014-6298-y | s2cid = 954858 }}</ref><br />
<br />
Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<br />
<br />
合成生物学引起了美国国家航空航天局的兴趣,因为它可以帮助利用从地球运送的有限化合物组合为宇航员生产资源。特别是在火星上,合成生物学可以催生基于当地资源的生产过程,使其成为开发对地球依赖性较低的载人前哨站的有力工具。<br />
<br />
<br />
<br />
=== Designed nucleic acid systems 设计核酸系统 ===<br />
<br />
Scientists can encode digital information onto a single strand of [[synthetic DNA]]. In 2012, [[George M. Church]] encoded one of his books about synthetic biology in DNA. The 5.3 [[Megabit|Mb]] of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA.<ref>{{cite journal | vauthors = Church GM, Gao Y, Kosuri S | title = Next-generation digital information storage in DNA | journal = Science | volume = 337 | issue = 6102 | pages = 1628 | date = September 2012 | pmid = 22903519 | doi = 10.1126/science.1226355 | bibcode = 2012Sci...337.1628C | s2cid = 934617 | url = https://semanticscholar.org/paper/0856a685e85bcd27c11cd5f385be818deceb27bd }}</ref> A similar project encoded the complete [[sonnet]]s of [[William Shakespeare]] in DNA.<ref>{{cite web|url=http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|title=Huge amounts of data can be stored in DNA|date=23 January 2013|publisher=Sky News|access-date=24 January 2013|archive-url=https://web.archive.org/web/20160531044937/http://news.sky.com/story/1041917/huge-amounts-of-data-can-be-stored-in-dna|archive-date=2016-05-31 }}</ref> More generally, algorithms such as NUPACK,<ref>{{Cite journal|last1=Zadeh|first1=Joseph N.|last2=Steenberg|first2=Conrad D.|last3=Bois|first3=Justin S.|last4=Wolfe|first4=Brian R.|last5=Pierce|first5=Marshall B.|last6=Khan|first6=Asif R.|last7=Dirks|first7=Robert M.|last8=Pierce|first8=Niles A.|date=2011-01-15|title=NUPACK: Analysis and design of nucleic acid systems|journal=Journal of Computational Chemistry|language=en|volume=32|issue=1|pages=170–173|doi=10.1002/jcc.21596|pmid=20645303}}</ref> ViennaRNA,<ref>{{Cite journal|last1=Lorenz|first1=Ronny|last2=Bernhart|first2=Stephan H.|last3=Höner zu Siederdissen|first3=Christian|last4=Tafer|first4=Hakim|last5=Flamm|first5=Christoph|last6=Stadler|first6=Peter F.|last7=Hofacker|first7=Ivo L.|date=2011-11-24|title=ViennaRNA Package 2.0|journal=Algorithms for 
Molecular Biology|language=en|volume=6|issue=1|pages=26|doi=10.1186/1748-7188-6-26|issn=1748-7188|pmc=3319429|pmid=22115189}}</ref> Ribosome Binding Site Calculator,<ref>{{Cite journal|last1=Salis|first1=Howard M.|last2=Mirsky|first2=Ethan A.|last3=Voigt|first3=Christopher A.|date=October 2009|title=Automated design of synthetic ribosome binding sites to control protein expression|journal=Nature Biotechnology|language=en|volume=27|issue=10|pages=946–950|doi=10.1038/nbt.1568|pmid=19801975|issn=1546-1696|pmc=2782888}}</ref> Cello,<ref>{{Cite journal|last1=Nielsen|first1=A. A. K.|last2=Der|first2=B. S.|last3=Shin|first3=J.|last4=Vaidyanathan|first4=P.|last5=Paralanov|first5=V.|last6=Strychalski|first6=E. A.|last7=Ross|first7=D.|last8=Densmore|first8=D.|last9=Voigt|first9=C. A.|date=2016-04-01|title=Genetic circuit design automation|journal=Science|language=en|volume=352|issue=6281|pages=aac7341|doi=10.1126/science.aac7341|pmid=27034378|issn=0036-8075|doi-access=free}}</ref> and Non-Repetitive Parts Calculator<ref>{{Cite journal|last1=Hossain|first1=Ayaan|last2=Lopez|first2=Eriberto|last3=Halper|first3=Sean M.|last4=Cetnar|first4=Daniel P.|last5=Reis|first5=Alexander C.|last6=Strickland|first6=Devin|last7=Klavins|first7=Eric|last8=Salis|first8=Howard M.|date=2020-07-13|title=Automated design of thousands of nonrepetitive parts for engineering stable genetic systems|url=https://www.nature.com/articles/s41587-020-0584-2|journal=Nature Biotechnology|language=en|pages=1–10|doi=10.1038/s41587-020-0584-2|pmid=32661437|s2cid=220506228|issn=1546-1696}}</ref> enable the design of new genetic systems.<br />
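As a rough illustration of how digital data maps onto a strand, the sketch below implements a toy one-bit-per-base codec (0 encoded as A or C, 1 as G or T, chosen randomly to avoid long homopolymer runs). This is a simplified, assumption-level model, not the exact encoding used in the Church experiment, which adds addressing blocks and redundancy.<br />

```python
import random

# Toy one-bit-per-base codec: 0 -> A or C, 1 -> G or T.
# Picking randomly between the two bases for each bit helps avoid
# long homopolymer runs, which are hard to synthesize and sequence.
ZERO, ONE = "AC", "GT"

def encode(bits: str) -> str:
    return "".join(random.choice(ZERO if b == "0" else ONE) for b in bits)

def decode(dna: str) -> str:
    return "".join("0" if base in ZERO else "1" for base in dna)

def text_to_bits(text: str) -> str:
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def bits_to_text(bits: str) -> str:
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

message = "synthetic biology"
strand = encode(text_to_bits(message))
assert bits_to_text(decode(strand)) == message  # lossless round trip
```

At 2 bits per nucleotide pairwise this scheme stores 1 bit per base; real DNA-storage pipelines trade density against synthesis and sequencing error rates.<br />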
<br />
<br />
<br />
Gene functions in the minimal genome of the synthetic organism, ''Syn 3''.<br />
<br />
在合成生物的最小基因组中发挥功能的基因,Syn 3。<br />
<br />
Many technologies have been developed for incorporating [[Nucleic acid analogue|unnatural nucleotides]] and amino acids into nucleic acids and proteins, both ''in vitro'' and ''in vivo''. For example, in May 2014, researchers announced that they had successfully introduced two new artificial [[nucleotides]] into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate [[Messenger RNA|mRNA]] or proteins able to use the artificial nucleotides.<ref name="NYT-20140507">{{cite news|url=https://www.nytimes.com/2014/05/08/business/researchers-report-breakthrough-in-creating-artificial-genetic-code.html|title=Researchers Report Breakthrough in Creating Artificial Genetic Code|last=Pollack|first=Andrew|date=May 7, 2014|work=[[New York Times]]|access-date=May 7, 2014}}</ref><ref name="NATURE-20140507">{{cite journal|last=Callaway|first=Ewen|date=May 7, 2014|title=First life with 'alien' DNA|url=http://www.nature.com/news/first-life-with-alien-dna-1.15179|journal=[[Nature (journal)|Nature]]|doi=10.1038/nature.2014.15179|s2cid=86967999|access-date=May 7, 2014}}</ref><ref name="NATJ-20140507">{{cite journal|vauthors=Malyshev DA, Dhami K, Lavergne T, Chen T, Dai N, Foster JM, Corrêa IR, Romesberg FE|date=May 2014|title=A semi-synthetic organism with an expanded genetic alphabet|journal=Nature|volume=509|issue=7500|pages=385–8|bibcode=2014Natur.509..385M|doi=10.1038/nature13314|pmc=4058825|pmid=24805238}}</ref><br />
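The expanded alphabet can be modeled as an ordinary complement table with one extra base pair. In the sketch below, X stands in for dNaM and Y for d5SICS; the single-letter labels are illustrative conveniences, not standard nomenclature.<br />

```python
# Watson-Crick pairs plus the unnatural d5SICS-dNaM pair.
# X = dNaM and Y = d5SICS are illustrative labels for the two
# artificial nucleotides; they pair with each other, not with A/T/G/C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G", "X": "Y", "Y": "X"}

def reverse_complement(strand: str) -> str:
    """Return the reverse complement over the six-letter alphabet."""
    return "".join(PAIR[base] for base in reversed(strand))

duplex_top = "ATGCXTA"
duplex_bottom = reverse_complement(duplex_top)
# Complementing twice recovers the original strand.
assert reverse_complement(duplex_bottom) == duplex_top
```

The point of the model is that replication machinery only needs a consistent pairing rule; the six-letter table behaves exactly like the four-letter one.<br />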
<br />
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to probe the origins of life, study some of the properties of life, or, more ambitiously, recreate life from non-living (abiotic) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools. Nobody has yet been able to create a fully synthetic living cell. In 2010, however, a chemically synthesized bacterial genome was introduced into genomically emptied host cells, which were able to grow and replicate; the resulting Mycoplasma laboratorium is the only living organism with a completely engineered genome.<br />
<br />
合成生物学的一个重要课题是合成生命,它涉及在体外由生物分子和/或其化学类似物创造的假想生物体。合成生命实验或者试图探索生命的起源,研究生命的某些特性,或者更雄心勃勃地从非生命(非生物)组成部分中重新创造生命。合成生命生物学试图创造能够执行重要功能的生命有机体,从制造药品到净化被污染的土地和水。在医学上,它提供了使用设计生物学部件作为新类型治疗和诊断工具起点的前景。迄今还没有人能够创造出完全合成的活细胞。不过,2010年,一个化学合成的细菌基因组被导入基因组被清空的宿主细胞,这些宿主细胞能够生长和复制;由此产生的实验室支原体(Mycoplasma laboratorium)是唯一一个拥有完全工程化基因组的生物体。<br />
<br />
<br />
<br />
=== Space exploration 太空探索 ===<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome with an expanded genetic code. The nucleosides added are d5SICS and dNaM. In 2017 the international Build-a-Cell large-scale research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. <br />
The European synthetic cell efforts were unified in 2019 as SynCellEU initiative.<br />
<br />
2014年,第一个具有人工扩展 DNA 编码的活有机体问世;研究小组使用大肠杆菌,提取其基因组,并用带有扩展基因编码的染色体替换。添加的核苷是 d5SICS 和 dNaM。2017年,旨在构建合成活细胞的国际大规模研究合作项目 Build-a-Cell 启动,随后多个国家成立了国家级合成细胞组织,包括 FabriCell、MaxSynBio 和 BaSyC。欧洲合成细胞的努力在2019年被统一为 SynCellEU 倡议。<br />
<br />
Synthetic biology raised [[NASA|NASA's]] interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth.<ref name="Verseux, C. 2015 73–100">{{Cite book|author=Verseux, C.|author2=Paulino-Lima, I.|author3=Baque, M.|author4=Billi, D.|author5=Rothschild, L.|date=2016|title=Synthetic Biology for Space Exploration: Promises and Societal Implications|journal=Ambivalences of Creating Life. Societal and Philosophical Dimensions of Synthetic Biology, Publisher: Springer-Verlag|volume=45|pages=73–100|doi=10.1007/978-3-319-21088-9_4|series=Ethics of Science and Technology Assessment|isbn=978-3-319-21087-2}}</ref><ref>{{cite journal|last1=Menezes|first1=A|last2=Cumbers|first2=J|last3=Hogan|first3=J|last4=Arkin|first4=A|date=2014|title=Towards synthetic biological approaches to resource utilization on space missions|journal=Journal of the Royal Society, Interface|volume=12|issue=102|pages=20140715|doi=10.1098/rsif.2014.0715|pmid=25376875|pmc=4277073}}</ref><ref>{{cite journal | vauthors = Montague M, McArthur GH, Cockell CS, Held J, Marshall W, Sherman LA, Wang N, Nicholson WL, Tarjan DR, Cumbers J | title = The role of synthetic biology for in situ resource utilization (ISRU) | journal = Astrobiology | volume = 12 | issue = 12 | pages = 1135–42 | date = December 2012 | pmid = 23140229 | doi = 10.1089/ast.2012.0829 | bibcode = 2012AsBio..12.1135M }}</ref> On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of manned outposts with less dependence on Earth.<ref name="Verseux, C. 2015 73–100" /> Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.<ref>{{Cite web|title=NASA - Designer Plants on Mars|url=https://www.nasa.gov/centers/goddard/news/topstory/2005/mars_plants.html|last=GSFC|first=Bill Steigerwald |website=www.nasa.gov|language=en|access-date=2020-05-29}}</ref><br />
<br />
<br />
<br />
=== Synthetic life 合成生命 ===<br />
<br />
{{Further|Artificially Expanded Genetic Information System|Hypothetical types of biochemistry}}<br />
<br />
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. The other way is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into bacteria. The bacteria then only release target therapeutic molecules to the tumor through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
<br />
长期以来,细菌一直被用于癌症治疗。双歧杆菌和梭状芽胞杆菌选择性地定殖于肿瘤并减小肿瘤体积。最近,合成生物学家对细菌进行了重新编程,使其能够感知特定的癌症状态并做出反应。大多数情况下,细菌被用来直接向肿瘤输送治疗分子,以最小化脱靶效应。为了靶向肿瘤细胞,细菌表面表达出了可以特异性识别肿瘤的肽。所用的多肽包括一种特异性靶向人类表皮生长因子受体2的亲和体分子(affibody)和一种合成粘附素。另一种方法是通过在细菌中构建“与”逻辑门,让细菌感知肿瘤微环境,例如缺氧。然后,细菌只通过溶菌或细菌分泌系统向肿瘤释放靶向治疗分子。溶菌的优点是可以刺激免疫系统并控制生长。这个过程中可以使用多种类型的分泌系统及其他策略。该系统可由外部信号诱导,诱导因子包括化学物质、电磁波或光波。<br />
<br />
[[File:Syn3 genome.svg|thumb|upright=1.25|[[Gene]] functions in the minimal [[genome]] of the synthetic organism, ''[[Syn 3]]''.<ref name="Hutchison">{{cite journal | vauthors = Hutchison CA, Chuang RY, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, Gill J, Kannan K, Karas BJ, Ma L, Pelletier JF, Qi ZQ, Richter RA, Strychalski EA, Sun L, Suzuki Y, Tsvetanova B, Wise KS, Smith HO, Glass JI, Merryman C, Gibson DG, Venter JC | title = Design and synthesis of a minimal bacterial genome | journal = Science | volume = 351 | issue = 6280 | pages = aad6253 | date = March 2016 | pmid = 27013737 | doi = 10.1126/science.aad6253 | bibcode = 2016Sci...351.....H | doi-access = free }}</ref>]]<br />
<br />
One important topic in synthetic biology is ''synthetic life'', that is concerned with hypothetical organisms created ''[[in vitro]]'' from [[biomolecule]]s and/or [[hypothetical types of biochemistry|chemical analogues thereof]]. Synthetic life experiments attempt to either probe the [[origins of life]], study some of the properties of life, or more ambitiously to recreate life from non-living ([[abiotic components|abiotic]]) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water.<ref name="enzymes2014">{{cite news |last=Connor |first=Steve |url=https://www.independent.co.uk/news/science/major-synthetic-life-breakthrough-as-scientists-make-the-first-artificial-enzymes-9896333.html |title=Major synthetic life breakthrough as scientists make the first artificial enzymes |work=The Independent |location=London |date=1 December 2014 |access-date=2015-08-06 }}</ref> In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.<ref name="enzymes2014" /><br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and is unique to cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
在这些治疗方法中应用了多种菌株。最常用的细菌是鼠伤寒沙门氏菌、大肠桿菌、双歧杆菌、链球菌、乳酸菌、李斯特菌和枯草杆菌。这些物种中的每一个都有自己的特性。在定殖组织、与免疫系统的相互作用和易于应用方面,它们对癌症治疗各有独到之处。<br />
<br />
<br />
<br />
A living "artificial cell" has been defined as a completely synthetic cell that can capture [[energy]], maintain [[electrochemical gradient|ion gradients]], contain [[macromolecules]] as well as store information and have the ability to [[mutate]].<ref name="Deamer">{{cite journal | vauthors = Deamer D | title = A giant step towards artificial life? | journal = Trends in Biotechnology | volume = 23 | issue = 7 | pages = 336–8 | date = July 2005 | pmid = 15935500 | doi = 10.1016/j.tibtech.2005.05.008 }}</ref> Nobody has been able to create such a cell.<ref name='Deamer'/><br />
<br />
<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells.<br />
<br />
免疫系统在癌症中起着重要作用。可以利用免疫系统攻击癌细胞。以细胞为基础的疗法主要是免疫疗法,主要是通过改造 T 细胞。<br />
<br />
A completely synthetic bacterial chromosome was produced in 2010 by [[Craig Venter]], and his team introduced it to genomically emptied bacterial host cells.<ref name="gibson52">{{cite journal | vauthors = Gibson DG, Glass JI, Lartigue C, Noskov VN, Chuang RY, Algire MA, Benders GA, Montague MG, Ma L, Moodie MM, Merryman C, Vashee S, Krishnakumar R, Assad-Garcia N, Andrews-Pfannkoch C, Denisova EA, Young L, Qi ZQ, Segall-Shapiro TH, Calvey CH, Parmar PP, Hutchison CA, Smith HO, Venter JC | title = Creation of a bacterial cell controlled by a chemically synthesized genome | journal = Science | volume = 329 | issue = 5987 | pages = 52–6 | date = July 2010 | pmid = 20488990 | doi = 10.1126/science.1190719 | bibcode = 2010Sci...329...52G | doi-access = free }}</ref> The host cells were able to grow and replicate.<ref>{{cite web| url=https://www.npr.org/templates/transcript/transcript.php?storyId=127010591| title=Scientists Reach Milestone On Way To Artificial Life| access-date=2010-06-09|date=2010-05-20}}</ref><ref>{{cite web|last1=Venter|first1=JC|title=From Designing Life to Prolonging Healthy Life|url=https://www.youtube.com/watch?v=Gwu_djYMm3w&t=30s|website=YouTube|publisher=University of California Television (UCTV)|access-date=1 February 2017}}</ref> The [[Mycoplasma laboratorium]] is the only living organism with completely engineered genome.<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second generation CAR-based therapy was approved by FDA.<br />
<br />
T 细胞受体被设计和“训练”用以检测癌症表位。嵌合抗原受体(CAR)是由融合于细胞内 T 细胞信号域的抗体片段组成,这些信号域可以激活并触发细胞增殖。美国食品药品监督管理局(FDA)批准了一种第二代基于嵌合抗原受体的疗法。<br />
<br />
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used ''E. coli'' that had its genome extracted and replaced with a chromosome with an expanded genetic code. The [[nucleoside]]s added are [[d5SICS]] and [[dNaM]].<ref name="NATJ-20140507"/><br />
<br />
<br />
<br />
Gene switches were designed to enhance safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects. Other mechanisms can control the system more finely, stopping and reactivating it. Since the number of T cells is important for therapy persistence and severity, the growth of T cells is also controlled to tune the effectiveness and safety of therapeutics.<br />
<br />
基因开关被设计出来以提高治疗的安全性。如果病人出现严重的副作用,杀伤开关就会终止治疗。机制可以更好地控制系统,停止和重新激活它。由于 T 细胞的数量对治疗的持续性和强度非常重要,因此 T 细胞的生长也受到控制,从而平衡治疗的有效性和安全性。<br />
<br />
In May 2019, researchers, in a milestone effort, reported the creation of a new [[Synthetic biology#Synthetic life|synthetic]] (possibly [[Artificial life#Biochemical-based ("wet")|artificial]]) form of [[wikt:viability|viable]] [[life]], a variant of the [[bacteria]] ''[[Escherichia coli]]'', by reducing the natural number of 64 [[codon]]s in the bacterial [[genome]] to 59 codons instead, in order to encode 20 [[amino acid]]s.<ref name="NYT-20190515"/><ref name="NAT-20190515"/><br />
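The recoding result can be checked mechanically: a genetic code remains complete as long as every amino acid (and the stop signal) retains at least one codon. The sketch below removes five synonymous codons from the standard 64-codon table to leave a 59-codon code; the particular removed set is illustrative, not the one used in the actual recoded strain.<br />

```python
# Standard genetic code, written compactly: each amino acid
# (or '*' for stop) maps to its synonymous codons.
GENETIC_CODE = {
    "F": ["TTT", "TTC"], "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
    "I": ["ATT", "ATC", "ATA"], "M": ["ATG"],
    "V": ["GTT", "GTC", "GTA", "GTG"],
    "S": ["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"],
    "P": ["CCT", "CCC", "CCA", "CCG"], "T": ["ACT", "ACC", "ACA", "ACG"],
    "A": ["GCT", "GCC", "GCA", "GCG"], "Y": ["TAT", "TAC"],
    "H": ["CAT", "CAC"], "Q": ["CAA", "CAG"], "N": ["AAT", "AAC"],
    "K": ["AAA", "AAG"], "D": ["GAT", "GAC"], "E": ["GAA", "GAG"],
    "C": ["TGT", "TGC"], "W": ["TGG"],
    "R": ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],
    "G": ["GGT", "GGC", "GGA", "GGG"], "*": ["TAA", "TAG", "TGA"],
}

def still_complete(removed: set) -> bool:
    """True if every amino acid (and stop) keeps at least one codon."""
    return all(any(c not in removed for c in codons)
               for codons in GENETIC_CODE.values())

# Removing five synonymous codons (an illustrative set) leaves a
# 59-codon code that still encodes all 20 amino acids and stop.
removed = {"TCG", "TCA", "AGT", "AGA", "TAG"}
assert sum(len(v) for v in GENETIC_CODE.values()) == 64
assert still_complete(removed)
assert not still_complete({"ATG"})  # methionine has no synonym
```

Recoding in practice is far harder than this check suggests: every occurrence of a removed codon in the genome must be rewritten to a retained synonym without disturbing overlapping regulatory signals.<br />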
<br />
<br />
<br />
Although several mechanisms can improve safety and control, limitations include the difficulty of introducing large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.<br />
<br />
虽然有几种机制可以提高安全性和可控性,但它们也都存在局限性,包括很难将大型 DNA 电路导入细胞,以及将外来成分(特别是蛋白质)引入细胞所带来的风险。<br />
<br />
In 2017 the international [[Build-a-Cell]] large-scale research collaboration for the construction of synthetic living cell was started,<ref>{{cite web|url=http://buildacell.io/|title=Build-a-Cell|accessdate=4 Dec 2019}}</ref> followed by national synthetic cell organizations in several countries, including FabriCell,<ref>{{cite web|url=http://fabricell.org/|title=FabriCell|accessdate=8 Dec 2019}}</ref> MaxSynBio<ref>{{cite web|url=https://www.maxsynbio.mpg.de/home/|title=MaxSynBio - Max Planck Research Network in Synthetic Biology|accessdate=8 Dec 2019}}</ref> and BaSyC.<ref>{{cite web|url=http://www.basyc.nl/|title=BaSyC|accessdate=8 Dec 2019}}</ref> The European synthetic cell efforts were unified in 2019 as SynCellEU initiative.<ref>{{cite web|url=http://www.syntheticcell.eu/|title=SynCell EU|accessdate=8 Dec 2019}}</ref><br />
<br />
<br />
<br />
=== Drug delivery platforms 药物输送平台 ===<br />
<br />
==== Engineered bacteria-based platform 基于细菌设计的平台 ====<br />
<br />
Bacteria have long been used in cancer treatment. ''[[Bifidobacterium]]'' and ''[[Clostridium]]'' selectively colonize tumors and reduce their size.<ref name="Zu_2014">{{cite journal|vauthors=Zu C, Wang J|date=August 2014|title=Tumor-colonizing bacteria: a potential tumor targeting therapy|url=|journal=Critical Reviews in Microbiology|volume=40|issue=3|pages=225–35|doi=10.3109/1040841X.2013.776511|pmid=23964706|s2cid=26498221}}</ref> Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, [[peptide]]s that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an [[affibody molecule]] that specifically targets human [[Epidermal growth factor receptor|epidermal growth factor receptor 2]]<ref name="Gujrati_2014">{{cite journal|vauthors=Gujrati V, Kim S, Kim SH, Min JJ, Choy HE, Kim SC, Jon S|date=February 2014|title=Bioengineered bacterial outer membrane vesicles as cell-specific drug-delivery vehicles for cancer therapy|url=|journal=ACS Nano|volume=8|issue=2|pages=1525–37|doi=10.1021/nn405724x|pmid=24410085}}</ref> and a synthetic [[Adhesin molecule (immunoglobulin -like)|adhesin]].<ref name="Piñero-Lambea_2015">{{cite journal|vauthors=Piñero-Lambea C, Bodelón G, Fernández-Periáñez R, Cuesta AM, Álvarez-Vallina L, Fernández LÁ|date=April 2015|title=Programming controlled adhesion of E. coli to target surfaces, cells, and tumors with synthetic adhesins|journal=ACS Synthetic Biology|volume=4|issue=4|pages=463–73|doi=10.1021/sb500252a|pmc=4410913|pmid=25045780}}</ref> The other way is to allow bacteria to sense the [[tumor microenvironment]], for example hypoxia, by building an AND logic gate into bacteria.<ref>{{cite journal | last1 = Deyneko | first1 = I.V. | last2 = Kasnitz | first2 = N. | last3 = Leschner | first3 = S. 
| last4 = Weiss | first4 = S. | year = 2016| title = Composing a tumor specific bacterial promoter | url = | journal = PLOS ONE | volume = 11| issue = 5| page = e0155338| doi = 10.1371/journal.pone.0155338 | pmid = 27171245 | pmc = 4865170 }}</ref> The bacteria then only release target therapeutic molecules to the tumor through either [[lysis]]<ref>{{cite journal | last1 = Rice | first1 = KC | last2 = Bayles | first2 = KW | year = 2008 | title = Molecular control of bacterial death and lysis | journal = Microbiol Mol Biol Rev | volume = 72 | issue = 1| pages = 85–109 | doi = 10.1128/mmbr.00030-07 | pmid = 18322035 | pmc = 2268280 }}</ref> or the [[bacterial secretion system]].<ref>{{cite journal | last1 = Ganai | first1 = S. | last2 = Arenas | first2 = R. B. | last3 = Forbes | first3 = N. S. | year = 2009 | title = Tumour-targeted delivery of TRAIL using Salmonella typhimurium enhances breast cancer survival in mice | url = | journal = Br. J. Cancer | volume = 101 | issue = 10| pages = 1683–1691 | doi = 10.1038/sj.bjc.6605403 | pmid = 19861961 | pmc = 2778534 }}</ref> Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves.<br />
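The tumor-sensing AND gate described above can be sketched as a boolean model in which the payload is released only when both inputs are present. The thresholds and the second input (a quorum-sensing proxy for colonization) are illustrative assumptions, not measured parameters.<br />

```python
from dataclasses import dataclass

# Minimal boolean model of a tumor-sensing AND gate: the therapeutic
# payload is released only when both conditions hold.
# Thresholds below are illustrative, not measured values.
HYPOXIA_THRESHOLD = 0.2   # fraction of normal oxygen tension
DENSITY_THRESHOLD = 0.5   # quorum-sensing proxy for colonization

@dataclass
class Microenvironment:
    oxygen: float        # 0.0 (anoxic) .. 1.0 (normoxic)
    cell_density: float  # 0.0 .. 1.0

def release_payload(env: Microenvironment) -> bool:
    hypoxic = env.oxygen < HYPOXIA_THRESHOLD
    colonized = env.cell_density > DENSITY_THRESHOLD
    return hypoxic and colonized  # the AND gate

# Release only inside a hypoxic, well-colonized tumor:
assert release_payload(Microenvironment(oxygen=0.05, cell_density=0.8))
assert not release_payload(Microenvironment(oxygen=0.05, cell_density=0.1))
assert not release_payload(Microenvironment(oxygen=0.9, cell_density=0.8))
```

Requiring both inputs is what gives the circuit its specificity: either signal alone (transient hypoxia elsewhere, or colonization of healthy tissue) leaves the payload unreleased.<br />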
<br />
The creation of new life and the tampering of existing life has raised ethical concerns in the field of synthetic biology and are actively being discussed.<br />
<br />
创造新生命以及篡改现存生命引起了合成生物学领域的伦理问题,目前正处于积极的讨论中。<br />
<br />
<br />
<br />
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are ''[[Salmonella enterica subsp. enterica|Salmonella typhimurium]]'', [[Escherichia coli|''Escherichia coli'']], ''Bifidobacteria'', ''[[Streptococcus]]'', ''[[Lactobacillus]]'', ''[[Listeria]]'' and ''[[Bacillus subtilis]]''. Each of these species has its own properties and is unique to cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<br />
<br />
合成生物学的伦理方面有三个主要特点: 生物研究安全性、生物安全性和创造新的生命形式。其他提到的伦理问题包括新生命的管理、新生命的专利管理、利益分配和研究的完整性。<br />
<br />
==== Cell-based platform 基于细胞的平台====<br />
<br />
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on [[Cancer immunotherapy|immunotherapies]], mostly by engineering [[T cell]]s.<br />
<br />
Ethical issues have surfaced for recombinant DNA and genetically modified organism (GMO) technologies, and extensive regulations of genetic engineering and pathogen research were already in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<br />
<br />
重组 DNA 和转基因生物(GMO)技术的伦理问题已经浮出水面,许多司法管辖区对基因工程和病原体研究有着广泛的规定。生物伦理总统委员会前任主席艾米·古特曼认为,我们应该避免过度监管合成生物学,尤其是基因工程的诱惑。古特曼认为:“监管节制在新兴技术领域尤为重要……在这些领域,出于不确定性和对未知事物的恐惧而扼杀创新的诱惑尤其强烈。法律和监管限制的生硬手段不仅可能抑制新利益的分配,而且可能因阻碍研究人员开发有效的保障措施而不利于安全和安保。”<br />
<br />
<br />
<br />
T cell receptors were engineered and ‘trained’ to detect cancer [[epitope]]s. [[Chimeric antigen receptor]]s (CARs) are composed of a fragment of an [[antibody]] fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. A second generation CAR-based therapy was approved by FDA.{{Citation needed|date=April 2018}}<br />
<br />
<br />
<br />
Gene switches were designed to enhance safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects.<ref>Jones, B.S., Lamb, L.S., Goldman, F. & Di Stasi, A. Improving the safety of cell therapy products by suicide gene transfer. Front. Pharmacol. 5, 254 (2014).</ref> Mechanisms can more finely control the system and stop and reactivate it.<ref>{{cite journal | last1 = Wei | first1 = P | last2 = Wong | first2 = WW | last3 = Park | first3 = JS | last4 = Corcoran | first4 = EE | last5 = Peisajovich | first5 = SG | last6 = Onuffer | first6 = JJ | last7 = Weiss | first7 = A | last8 = LiWA | year = 2012 | title = Bacterial virulence proteins as tools to rewire kinase pathways in yeast and immune cells | url = | journal = Nature | volume = 488 | issue = 7411| pages = 384–388 | doi = 10.1038/nature11259 | pmid = 22820255 | pmc = 3422413 }}</ref><ref>{{cite journal | last1 = Danino | first1 = T. | last2 = Mondragon-Palomino | first2 = O. | last3 = Tsimring | first3 = L. | last4 = Hasty | first4 = J. | year = 2010 | title = A synchronized quorum of genetic clocks | url = | journal = Nature | volume = 463 | issue = 7279| pages = 326–330 | doi = 10.1038/nature08753 | pmid = 20090747 | pmc = 2838179 }}</ref> Since the number of T-cells are important for therapy persistence and severity, growth of T-cells is also controlled to dial the effectiveness and safety of therapeutics.<ref>{{cite journal | last1 = Chen | first1 = Y. Y. | last2 = Jensen | first2 = M. C. | last3 = Smolke | first3 = C. D. | year = 2010 | title = Genetic control of mammalian T-cell proliferation with synthetic RNA regulatory systems | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 107 | issue = 19| pages = 8531–6 | doi = 10.1073/pnas.1001721107 | pmid = 20421500 | pmc = 2889348 }}</ref><br />
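The switch logic described above (stop, reactivate, and an irreversible kill switch for severe side effects) can be sketched as a small state machine; the inducer names are hypothetical placeholders, not real signaling molecules.<br />

```python
class TherapyController:
    """Toy state machine for inducible therapy safety switches.
    Inducer names ('kill', 'pause', 'resume') are illustrative
    placeholders, not actual chemical or optical inducers."""

    def __init__(self):
        self.active = True
        self.terminated = False

    def signal(self, inducer: str) -> None:
        if self.terminated:
            return  # a kill switch is irreversible by design
        if inducer == "kill":      # severe side effects observed
            self.active = False
            self.terminated = True
        elif inducer == "pause":   # finer control: stop the circuit...
            self.active = False
        elif inducer == "resume":  # ...and reactivate it later
            self.active = True

ctrl = TherapyController()
ctrl.signal("pause")
assert not ctrl.active
ctrl.signal("resume")
assert ctrl.active
ctrl.signal("kill")
ctrl.signal("resume")
assert not ctrl.active  # cannot reactivate after the kill switch fires
```

The distinction the model captures is the one the paragraph draws: pause/resume switches tune dosage and persistence, while the kill switch is a one-way safety exit.<br />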
<br />
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
有这样一个道德问题,创造新的生命形式(有时被称为“扮演上帝”)是否可以接受。目前,自然界中不存在的新生命形式的创造规模很小,潜在的好处和危险仍然不为人知,并且大多数研究确保进行了认真的考虑和监督。通过制造营养缺陷,细菌和酵母可以被改造为不能生产组氨酸的类型。组氨酸是一种对所有生命来说都很重要的氨基酸。因此,这些微生物只能在实验室条件下在富含组氨酸的培养基上生长,从而消除了人们对它们可能扩散到不良区域的担忧。<br />
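The histidine-auxotrophy containment idea above reduces to a simple predicate: an engineered strain grows only where the medium supplies the nutrients it can no longer synthesize. A minimal sketch, with an illustrative nutrient set:<br />

```python
# Toy model of auxotrophic biocontainment: a strain with knocked-out
# biosynthesis pathways survives only if the medium supplies every
# nutrient it can no longer make itself.
def can_grow(medium: set, auxotrophies: set) -> bool:
    return auxotrophies <= medium  # all missing nutrients supplied

his_auxotroph = {"histidine"}          # engineered dependency
rich_lab_medium = {"histidine", "tryptophan"}

assert can_grow(rich_lab_medium, his_auxotroph)  # grows in the lab
assert not can_grow(set(), his_auxotroph)        # no growth on escape
```

This is the containment logic in its simplest form: escape into an environment lacking the supplemented nutrient is self-limiting.<br />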
<br />
<br />
<br />
<br />
== Ethics 伦理问题 ==<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions.<br /><br />
<br />
一些伦理问题与生物安保有关:生物合成技术可能被有意用来危害社会和/或环境。由于合成生物学引起了伦理和生物安保问题,人类必须考虑并规划如何处理潜在的有害创造物,以及可以采取何种伦理措施来阻止生物合成技术被恶意使用。然而,除了对合成生物学和生物技术公司的监管之外,这些问题并不被视为新问题,因为它们在早期关于重组 DNA 和转基因生物(GMO)的辩论中就已被提出,而且许多司法管辖区已经对基因工程和病原体研究进行了广泛的监管。<br />
<br />
{{Update|section|date=January 2019}}<br />
<br />
<br />
<br />
The creation of new life and the tampering of existing life has raised [[Ethics|ethical concerns]] in the field of synthetic biology and are actively being discussed.<ref name=":3" /><br />
<br />
<br />
<br />
The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
欧盟资助的项目 SYNBIOSAFE 已经发布了关于如何管理合成生物学的报告。2007年的一篇论文确定了技术安全、生命安全、伦理和科学-社会接口方面的关键问题,并将其定义为公共教育和科学家、企业、政府和伦理学家之间的持续交流。SYNBIOSAFE 确定的关键生命安全问题涉及到销售合成 DNA 的公司和业余生物学家组成的生物黑客社区。关键的伦理问题涉及到创造新的生命形式。<br />
<br />
Common ethical questions include:<br />
常见的伦理问题包括:<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<br />
<br />
随后的一份报告聚焦于生物安全,特别是所谓的“两用”挑战。例如,虽然合成生物学可能带来更有效的医疗产品生产,但它也可能被用来合成或改造有害的病原体(例如天花)。生物黑客社区仍然特别令人关切,因为开源生物技术分布广泛、扩散性强,使得跟踪、监管或减轻对生物安全和生物安保的隐忧变得困难。<br />
<br />
* Is it morally right to tamper with nature?<br />
篡改自然在道德上是正确的吗?<br />
<br />
* Is one playing God when creating new life?<br />
创造新生命时,人是否在扮演上帝?<br />
<br />
COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009.<br />
<br />
COSY 是欧洲的另一项倡议,主要关注公众认知和交流。为了更好地向更广泛的公众宣传合成生物学及其社会影响,COSY 和 SYNBIOSAFE 于2009年10月发布了一部38分钟的纪录片《SYNBIOSAFE》。<br />
<br />
* What happens if a synthetic organism accidentally escapes?<br />
如果一种合成生命体意外地从实验室中泄露出去,会发生什么?<br />
<br />
* What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?<br />
假如某个个体错误地使用合成生物学并制造了一个有害的尸体,那该怎么办?<br />
<br />
The International Association Synthetic Biology has proposed self-regulation. This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<br />
<br />
国际合成生物学协会已经建议进行自我调节。它提出了合成生物产业,特别是 DNA 合成公司,应该实施的具体措施。2007年,由主要的 DNA 合成公司的科学家领导的一个小组发表了“为 DNA 合成工业制定有效监督框架的实用计划”。<br />
<br />
* Who will have control of and access to the products of synthetic biology? <br />
谁会拥有控制和访问合成生物产品的权限?<br />
<br />
* Who will gain from these innovations? Investors? Medical patients? Industrial farmers?<br />
谁会从这些创新中获利?投资者?患者?工业农民?<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<br />
<br />
2009年7月9日至10日,美国国家学院科学、技术和法律委员会召开了一次名为“合成生物学新兴领域的机遇与挑战”的研讨会。<br />
<br />
* Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?<ref>{{Cite web|url=https://www.theguardian.com/science/2018/nov/26/worlds-first-gene-edited-babies-created-in-china-claims-scientist|title= World's first gene-edited babies created in China, claims scientist |last=Staff|first=Agencies|date=November 2018|website=The Guardian|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
* What if a new creation is deserving of moral or legal status?<br />
如果一个新生命理应拥有道德和法律地位该怎么办?<br />
<br />
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the “creation of life”. It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public". Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<br />
<br />
在发表了第一个合成基因组以及随之而来的关于”生命”的媒体报道之后,巴拉克·奥巴马总统设立了研究合成生物学的生物伦理问题总统委员会。该委员会召开了一系列会议,并于2010年12月发布了一份题为《新方向: 合成生物学和新兴技术的伦理学》的报告。委员会指出:“虽然文特尔的成就标志着一项重大的技术进步,证明了一个相对较大的基因组可以准确地合成和替代另一个基因组,但它并不等于‘创造生命’。”报告指出,合成生物学是一个新兴的领域,它产生了潜在的风险和回报。该委员会没有对政策或监督方面的改变提出建议,并呼吁继续为研究提供资金,并为监测、研究新出现的道德问题和公共教育提供新资金。这些安全问题可以通过政策立法规范生物技术的工业用途来避免。“生物伦理总统委员会正在提出关于基因操纵的联邦指导方针...... 作为对宣布从化学合成的基因组中创造出自我复制细胞的回应,提出了18项建议,不仅仅是为了规范科学...... 为了教育公众。”。理查德·路文汀 (Richard Lewontin) 写道,《合成生物学监督原则》中讨论的一些监督安全原则是合理的,但宣言中的建议存在的主要问题是“广大公众缺乏能力,无法强制任意有意义地实现这些建议”。<br />
<br />
<br />
<br />
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms.<ref>{{Cite journal|title=Synthetic Biology and Ethics: Past, Present, and Future|last=Hayry|first=Mattie|date=April 2017|journal=Cambridge Quarterly of Healthcare Ethics|volume=26|issue=2|pages=186–205|doi=10.1017/S0963180116000803|pmid=28361718}}</ref> Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.<ref>{{Cite journal|title=Synthetic biology applied in the agrifood sector: Public perceptions, attitudes and implications for future studies|last=Jin |display-authors=etal |first=Shan|date=September 2019|journal=Trends in Food Science and Technology|volume=91|pages=454–466|doi=10.1016/j.tifs.2019.07.025}}</ref><ref name=":3">{{Cite journal|url=https://heinonline.org/HOL/LandingPage?handle=hein.journals/macq15&div=8&id=&page=| title=Synthetic Biology: Ethics, Exceptionalism and Expectations| pages=45| last=Newson|first=AJ|date=2015|journal=Macquarie Law Journal| volume=15|url-status=live|archive-url=|archive-date=|access-date=}}</ref><br />
<br />
<br />
<br />
Ethical issues have surfaced for [[recombinant DNA]] and [[genetically modified organism]] (GMO) technologies, and extensive regulations of [[genetic engineering]] and pathogen research were already in place in many jurisdictions. [[Amy Gutmann]], former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."<ref>{{cite journal | first = Amy | last = Gutmann | date = 2012 | title = The Ethics of Synthetic Biology | volume=41 | issue=4 | pages = 17–22 | journal = The Hastings Center Report | doi = 10.1002/j.1552-146X.2011.tb00118.x | pmid = 21845917 | s2cid = 20662786 }}</ref><br />
<br />
<br />
<br />
<br />
=== The "creation" of life 创造生命 ===<br />
<br />
<br />
<br />
<br />
One ethical question is whether it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature occurs only on a small scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies.<ref name=":3" /> Many advocates stress the great potential value of creating artificial life forms for agriculture, medicine, and academic knowledge, among other fields. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how [[algal bloom]]s kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to possess [[nociception|the capacity to sense pain]], [[sentience]], or self-perception. Should such life be given moral or legal rights? If so, how?<br />
<br />
<br />
<br />
=== Biosafety and biocontainment 生物安全和生物遏制 ===<br />
<br />
What is most ethically appropriate when considering biosafety measures? How can accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought have been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics, as there is as yet no example of a transgenic microbe that has gained a fitness advantage in the wild.<br />
<br />
<br />
<br />
In general, existing [[Hierarchy of hazard controls|hazard controls]], risk assessment methodologies, and regulations developed for traditional [[genetically modified organism]]s (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" [[biocontainment]] methods in a laboratory context include physical containment through [[biosafety cabinet]]s and [[glovebox]]es, as well as [[personal protective equipment]]. In an agricultural context they include isolation distances and [[pollen]] barriers, similar to methods for [[Biocontainment of genetically modified organisms|biocontainment of GMOs]]. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent [[horizontal gene transfer]] to natural organisms. Examples of intrinsic biocontainment include [[auxotrophy]], biological [[kill switch]]es, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of [[Xenobiology|xenobiological]] organisms using alternative biochemistry, for example using artificial [[xeno nucleic acid]]s (XNA) instead of DNA.<ref name=":12" /><ref name=":32">{{Cite journal|url=https://publications.europa.eu/en/publication-detail/-/publication/bfd7d06c-d3ae-11e5-a4b5-01aa75ed71a1/language-en|title=Opinion on synthetic biology II: Risk assessment methodologies and safety aspects|last=|first=|date=2016-02-12|website=EU [[Directorate-General for Health and Consumers]]|pages=|via=|doi=10.2772/63529|archive-url=|archive-date=|access-date=|volume=|publisher=Publications Office}}</ref> Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce [[histidine]], an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.<br />
<br />
<br />
<br />
<br />
<br />
=== Biosecurity 生物安保 ===<br />
<br />
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. Apart from the regulation of synthetic biology and biotechnology companies,<ref name="Bügl, H. et al. 2007 627–629">{{cite journal | vauthors = Bügl H, Danner JP, Molinari RJ, Mulligan JT, Park HO, Reichert B, Roth DA, Wagner R, Budowle B, Scripp RM, Smith JA, Steele SJ, Church G, Endy D | title = DNA synthesis and biological security | journal = Nature Biotechnology | volume = 25 | issue = 6 | pages = 627–9 | date = June 2007 | pmid = 17557094 | doi = 10.1038/nbt0607-627 | s2cid = 7776829 }}</ref><ref>{{cite web|url = http://www.synbioproject.org/site/assets/files/1335/hastings.pdf|title = Ethical Issues in Synthetic Biology: An Overview of the Debates|date = |access-date = |website = }}</ref> however, the issues are not seen as new, because they were raised during the earlier [[recombinant DNA]] and [[genetically modified organism]] (GMO) debates, and extensive regulations of [[genetic engineering]] and pathogen research are already in place in many jurisdictions.<ref name="bioethics.gov">Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/synthetic-biology-report NEW DIRECTIONS The Ethics of Synthetic Biology and Emerging Technologies] Retrieved 2012-04-14.</ref><br /><br />
<br />
<br />
<br />
=== European Union 欧盟===<br />
<br />
<br />
<br />
The [[European Union]]-funded project SYNBIOSAFE<ref>[http://www.synbiosafe.eu/ SYNBIOSAFE official site]</ref> has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists.<ref name="Priorities">{{cite journal | vauthors = Schmidt M, Ganguli-Mitra A, Torgersen H, Kelle A, Deplazes A, Biller-Andorno N | title = A priority paper for the societal and ethical aspects of synthetic biology | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 3–7 | date = December 2009 | pmid = 19816794 | pmc = 2759426 | doi = 10.1007/s11693-009-9034-7 | url = http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf }}</ref><ref>Schmidt M. Kelle A. Ganguli A, de Vriend H. (Eds.) 2009. [https://www.springer.com/biomed/book/978-90-481-2677-4 "Synthetic Biology. The Technoscience and its Societal Consequences".] Springer Academic Publishing.</ref> The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the [[Do-it-yourself biology|biohacking]] community of amateur biologists. Key ethical issues concerned the creation of new life forms.<br />
<br />
欧盟资助的项目 SYNBIOSAFE 已经发布了关于如何管理合成生物学的报告。2007年的一篇论文确定了生物安全、生物安保、伦理和科学-社会接口方面的关键问题,并将后者定义为公共教育以及科学家、企业、政府和伦理学家之间的持续对话。SYNBIOSAFE 确定的关键安保问题涉及销售合成 DNA 的公司和由业余生物学家组成的生物黑客社区。关键的伦理问题则涉及新生命形式的创造。<br />
<br />
<br />
A subsequent report focused on biosecurity, especially the so-called [[dual use technology|dual-use]] challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., [[smallpox]]).<ref>{{cite journal | vauthors = Kelle A | title = Ensuring the security of synthetic biology-towards a 5P governance strategy | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 85–90 | date = December 2009 | pmid = 19816803 | pmc = 2759433 | doi = 10.1007/s11693-009-9041-8 }}</ref> The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.<ref>{{cite journal | vauthors = Schmidt M | title = Diffusion of synthetic biology: a challenge to biosafety | journal = Systems and Synthetic Biology | volume = 2 | issue = 1–2 | pages = 1–6 | date = June 2008 | pmid = 19003431 | pmc = 2671588 | doi = 10.1007/s11693-008-9018-z | url = http://www.markusschmidt.eu/pdf/Diffusion_of_synthetic_biology.pdf }}</ref><br />
<br />
随后的一份报告聚焦于生物安保,特别是所谓的两用性挑战。例如,虽然合成生物学可能带来更高效的医疗药物生产,但它也可能被用于合成或改造有害的病原体(例如天花)。生物黑客社区仍然特别令人关切,因为开源生物技术分布式、扩散性的特点使得追踪、监管或减轻生物安全与生物安保方面的潜在隐忧变得困难。<br />
<br />
<br />
COSY, another European initiative, focuses on public perception and communication.<ref>[http://www.synbio.at/ COSY: Communicating Synthetic Biology]</ref><ref>{{cite journal | vauthors = Kronberger N, Holtz P, Kerbe W, Strasser E, Wagner W | title = Communicating Synthetic Biology: from the lab via the media to the broader public | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 19–26 | date = December 2009 | pmid = 19816796 | pmc = 2759424 | doi = 10.1007/s11693-009-9031-x }}</ref><ref>{{cite journal | vauthors = Cserer A, Seiringer A | title = Pictures of Synthetic Biology : A reflective discussion of the representation of Synthetic Biology (SB) in the German-language media and by SB experts | journal = Systems and Synthetic Biology | volume = 3 | issue = 1–4 | pages = 27–35 | date = December 2009 | pmid = 19816797 | pmc = 2759430 | doi = 10.1007/s11693-009-9038-3 }}</ref> To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published ''SYNBIOSAFE'', a 38-minute documentary film, in October 2009.<ref>[http://www.synbiosafe.eu/DVD COSY/SYNBIOSAFE Documentary]</ref><br />
<br />
COSY 是欧洲的另一项倡议,主要关注公众认知和交流。为了更好地向更广泛的公众介绍合成生物学及其社会影响,COSY 和 SYNBIOSAFE 于2009年10月发布了一部38分钟的纪录片《SYNBIOSAFE》。<br />
<br />
<br />
The International Association Synthetic Biology has proposed self-regulation.<ref>Report of IASB [http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf "Technical solutions for biosecurity in synthetic biology"] {{webarchive |url=https://web.archive.org/web/20110719031805/http://www.ia-sb.eu/tasks/sites/synthetic-biology/assets/File/pdf/iasb_report_biosecurity_syntheticbiology.pdf |date=July 19, 2011 }}, Munich, 2008</ref> This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".<ref name="Bügl, H. et al. 2007 627–629" /><br />
<br />
国际合成生物学协会(International Association Synthetic Biology)提议进行行业自律,提出了合成生物学产业、特别是 DNA 合成公司应当实施的具体措施。2007年,一个由主要 DNA 合成公司的科学家领导的小组发表了“为 DNA 合成产业建立有效监督框架的实用计划”。<br />
<br />
<br />
=== United States 美国 ===<br />
<br />
<br />
<br />
In January 2009, the [[Alfred P. Sloan Foundation]] funded the [[Woodrow Wilson Center]], the [[Hastings Center]], and the [[J. Craig Venter Institute]] to examine the public perception, ethics and policy implications of synthetic biology.<ref>Parens E., Johnston J., Moses J. [http://www.thehastingscenter.org/who-we-are/our-research/selected-past-projects/ethical-issues-in-synthetic-biology-2/ Ethical Issues in Synthetic Biology.] 2009.</ref><br />
<br />
<br />
<br />
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".<ref>[http://sites.nationalacademies.org/PGA/stl/PGA_050738 NAS Symposium official site]</ref><br />
<br />
2009年7月9日至10日,美国国家学院科学、技术和法律委员会召开了一次题为“合成生物学新兴领域的机遇与挑战”的研讨会。<br />
<br />
<br />
After the publication of the [[Mycoplasma laboratorium|first synthetic genome]] and the accompanying media coverage about "life" being created, President [[Barack Obama]] established the [[Presidential Commission for the Study of Bioethical Issues]] to study synthetic biology.<ref>Presidential Commission for the study of Bioethical Issues, December 2010 [http://bioethics.gov/node/353 FAQ]</ref> The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter’s achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'".<ref>[http://bioethics.gov/node/353 Synthetic Biology F.A.Q.'s | Presidential Commission for the Study of Bioethical Issues]</ref> It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education.<ref name="bioethics.gov" /><br />
<br />
在发表了第一个合成基因组以及随之而来的关于“生命”被创造出来的媒体报道之后,巴拉克·奥巴马总统设立了总统生物伦理问题研究委员会来研究合成生物学。该委员会召开了一系列会议,并于2010年12月发布了一份题为《新方向:合成生物学和新兴技术的伦理学》的报告。委员会指出:“虽然文特尔的成就标志着一项重大的技术进步,证明了一个相对较大的基因组可以被准确地合成并替换另一个基因组,但它并不等于‘创造生命’。”报告指出,合成生物学是一个新兴领域,带来潜在的风险和回报。该委员会没有建议改变政策或监督方式,而是呼吁继续为研究提供资金,并为监测、研究新出现的伦理问题和公共教育提供新的资金。<br />
<br />
<br />
Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact".<ref name=":2">{{cite journal | vauthors = Erickson B, Singh R, Winters P | title = Synthetic biology: regulating industry uses of new biotechnologies | journal = Science | volume = 333 | issue = 6047 | pages = 1254–6 | date = September 2011 | pmid = 21885775 | doi = 10.1126/science.1211066 | bibcode = 2011Sci...333.1254E | s2cid = 1568198 | url = https://semanticscholar.org/paper/6ae989f6b07dc3c8a8694792d6fe8f036a0e0292 }}</ref> These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".<ref name=":2" /><br />
<br />
合成生物学作为推动生物学进步的重要工具,也带来了“开发生物武器的可能性、对人类健康可能产生的不可预见的负面影响……以及任何潜在的环境影响”。这些安全问题可以通过政策立法规范生物技术的工业用途来避免。关于基因操纵的联邦指导方针正在由总统生物伦理委员会提出,该委员会“针对宣布利用化学合成的基因组创造出自我复制细胞一事,提出了18项建议,不仅为了规范这门科学……也为了教育公众”。<br />
<br />
<br />
=== Opposition 反对意见 ===<br />
<br />
On March 13, 2012, over 100 environmental and civil society groups, including [[Friends of the Earth]], the [[International Center for Technology Assessment]] and the [[ETC Group (AGETC)|ETC Group]] issued the manifesto ''The Principles for the Oversight of Synthetic Biology''. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the [[human genome]] or [[human microbiome]].<ref>Katherine Xue for Harvard Magazine. September–October 2014 [http://harvardmagazine.com/2014/09/synthetic-biologys-new-menagerie Synthetic Biology’s New Menagerie]</ref><ref>Yojana Sharma for Scidev.net March 15, 2012. [http://www.scidev.net/global/genomics/news/ngos-call-for-international-regulation-of-synthetic-biology.html NGOs call for international regulation of synthetic biology]</ref> [[Richard Lewontin]] wrote that some of the safety tenets for oversight discussed in ''The Principles for the Oversight of Synthetic Biology'' are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".<ref>[http://www.nybooks.com/articles/archives/2014/may/08/new-synthetic-biology-who-gains/?insrc=rel#fnr-1 The New Synthetic Biology: Who Gains?] (2014-05-08), [[Richard C. Lewontin]], ''[[New York Review of Books]]''</ref><br />
<br />
理查德·列万廷(Richard Lewontin)写道,《合成生物学监督原则》中讨论的一些监督安全原则是合理的,但该宣言所提建议的主要问题在于“广大公众缺乏推动这些建议得到任何有意义落实的能力”。<br />
<br />
<br />
== Health and safety 健康和安全 ==<br />
<br />
{{Main|Hazards of synthetic biology}}<br />
<br />
<br />
<br />
The hazards of synthetic biology include [[biosafety]] hazards to workers and the public, [[biosecurity]] hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks.<ref name=":02">{{Cite journal|url=https://blogs.cdc.gov/niosh-science-blog/2017/01/24/synthetic-biology/|title=Synthetic Biology and Occupational Risk|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|date=2017-01-24|journal=Journal of Occupational and Environmental Hygiene|archive-url=|archive-date=|access-date=2018-11-30|last3=Schulte|first3=Paul|volume=14|issue=3|pages=224–236|pmid=27754800|doi=10.1080/15459624.2016.1237031|s2cid=205893358}}</ref><ref name=":12">{{Cite journal|last1=Howard|first1=John|last2=Murashov|first2=Vladimir|last3=Schulte|first3=Paul|date=2016-10-18|title=Synthetic biology and occupational risk|journal=Journal of Occupational and Environmental Hygiene|volume=14|issue=3|pages=224–236|doi=10.1080/15459624.2016.1237031|pmid=27754800|s2cid=205893358|issn=1545-9624}}</ref> For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for [[bioterrorism]]. 
Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals.<ref name=":7">{{Cite book|title=Biodefense in the Age of Synthetic Biology|date=2018-06-19|publisher=[[National Academies of Sciences, Engineering, and Medicine]]|isbn=9780309465182|location=|pages=|doi=10.17226/24890|pmid=30629396|last1=National Academies Of Sciences|first1=Engineering|author2=Division on Earth Life Studies|last3=Board On Life|first3=Sciences|author4=Board on Chemical Sciences Technology|author5=Committee on Strategies for Identifying Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology}}</ref> Lastly, environmental hazards include adverse effects on [[biodiversity]] and [[ecosystem services]], including potential changes to land use resulting from agricultural use of synthetic organisms.<ref name=":8">{{Cite web|url=http://ec.europa.eu/environment/integration/research/newsalert/multimedia/synthetic_biology_and_biodiversity.htm|title=Future Brief: Synthetic biology and biodiversity|last=|first=|date=September 2016|website=European Commission|pages=14–15|archive-url=|archive-date=|access-date=2019-01-14}}</ref><ref>{{Cite web|url=https://publications.europa.eu/en/publication-detail/-/publication/9b231c71-faf1-11e5-b713-01aa75ed71a1/language-en/format-PDF|title=Final opinion on synthetic biology III: Risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology|last=|first=|date=2016-04-04|website=EU Directorate-General for Health and Food Safety|pages=8, 27|archive-url=|archive-date=|access-date=2019-01-14}}</ref><br />
<br />
合成生物学的危害包括对工人和公众的生物安全危害、蓄意改造生物体以造成伤害所带来的生物安保危害,以及环境危害。其中生物安全危害与现有生物技术领域的危害类似,主要是接触病原体和有毒化学品,不过新型合成生物体也可能带来新的风险。在生物安保方面,人们担心合成或重新设计的生物体在理论上可能被用于生物恐怖主义。潜在的风险包括从零开始重造已知病原体、将现有病原体改造得更加危险,以及改造微生物使其产生有害的生物化学物质。最后,环境危害包括对生物多样性和生态系统服务的不利影响,其中包括农业使用合成生物体可能导致的土地利用变化。<br />
<br />
<br />
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences.<ref name=":32" /><ref name=":22">{{Cite web|url=http://www.hse.gov.uk/research/rrpdf/rr944.pdf|title=Synthetic biology: A review of the technology, and current and future needs from the regulatory framework in Great Britain|last1=Bailey|first1=Claire|last2=Metcalf|first2=Heather|date=2012|website=UK [[Health and Safety Executive]]|archive-url=|archive-date=|access-date=2018-11-29|last3=Crook|first3=Brian}}</ref> Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.<ref name=":5">{{Citation|last1=Pei|first1=Lei|title=Regulatory Frameworks for Synthetic Biology|date=2012|work=Synthetic Biology|pages=157–226|publisher=John Wiley & Sons, Ltd|doi=10.1002/9783527659296.ch5|isbn=9783527659296|last2=Bar‐Yam|first2=Shlomiya|last3=Byers‐Corbin|first3=Jennifer|last4=Casagrande|first4=Rocco|last5=Eichler|first5=Florentine|last6=Lin|first6=Allen|last7=Österreicher|first7=Martin|last8=Regardh|first8=Pernilla C.|last9=Turlington|first9=Ralph D.}}</ref><ref name=":4">{{Cite journal|last=Trump|first=Benjamin D.|date=2017-11-01|title=Synthetic biology regulation and governance: Lessons from TAPIC for the United States, European Union, and Singapore|journal=Health Policy|volume=121|issue=11|pages=1139–1146|doi=10.1016/j.healthpol.2017.07.010|pmid=28807332|issn=0168-8510|doi-access=free}}</ref><br />
<br />
通常认为,现有的转基因生物风险分析系统足以适用于合成生物体,尽管对于由单个基因序列“自下而上”构建的生物体可能存在困难。合成生物学一般受现行的转基因生物和生物技术法规以及针对下游商业产品的法规约束,不过目前各法域一般都没有专门针对合成生物学的法规。<br />
<br />
<br />
== See also 请参阅 ==<br />
<br />
{{Colbegin|colwidth=20em}}<br />
<br />
* ''[[ACS Synthetic Biology]]'' (journal)<br />
<br />
* [[Bioengineering]]<br />
<br />
* [[Biomimicry]]<br />
<br />
* [[Carlson Curve]]<br />
<br />
* [[Chiral life concept]]<br />
<br />
* [[Computational biology]]<br />
<br />
* [[Computational biomodeling]]<br />
<br />
* [[DNA digital data storage]]<br />
<br />
* [[Engineering biology]]<br />
<br />
{{Colend}}<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biology]]. Its edit history can be viewed at [[合成生物学/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E8%87%AA%E5%A4%8D%E5%88%B6_Self-replication&diff=18365自复制 Self-replication2020-11-15T09:23:24Z<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,未经人工整理和审校,带来阅读不便,请见谅。{{see also|Biological reproduction}}<br />
<br />
本词条已由[[用户:Qige96|Ricky]]、[[用户:Paradoxist-Paradoxer|Paradoxist@Paradoxer]]审校。<br />
<br />
{{Use dmy dates|date=April 2019|cs1-dates=y}}<br />
<br />
<br />
[[Image:DNA chemical structure.svg|thumb|right|200px|[[Molecular structure]] of [[DNA]] ]]<br />
<br />
DNA 的分子结构<br />
<br />
'''Self-replication''' is any behavior of a [[dynamical system]] that yields construction of an identical or similar copy of itself. [[Cell (biology)|Biological cell]]s, given suitable environments, reproduce by [[cell division]]. During cell division, [[DNA]] is replicated and can be transmitted to offspring during [[reproduction]]. [[virus (biology)|Biological viruses]] can [[Viral replication|replicate]], but only by commandeering the reproductive machinery of cells through a process of infection. Harmful [[prion]] proteins can replicate by converting normal proteins into rogue forms.<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref> [[Computer virus]]es reproduce using the hardware and software already present on computers. Self-replication in [[robotics]] has been an area of research and a subject of interest in [[science fiction]]. Any self-replicating mechanism which does not make a perfect copy ([[mutation]]) will experience [[genetic variation]] and will create variants of itself. These variants will be subject to [[natural selection]], since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
<br />
'''<font color="#ff8000">自复制 Self-replication </font>'''是动力系统的任何能构造出与自身相同或相似副本的行为。生物细胞在适当的环境下通过细胞分裂进行繁殖。在细胞分裂过程中,DNA 被复制,并可在生殖过程中传递给后代。生物病毒可以复制,但只能通过感染过程征用细胞的生殖机制。有害的朊病毒蛋白可以通过将正常蛋白质转化为异常形式来复制。<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref>计算机病毒利用计算机上已有的硬件和软件进行复制。机器人学中的自复制一直是一个研究领域,也是科幻小说关注的主题。任何不能产生完美副本的自复制机制都会发生变异(mutation),经历遗传变异,产生自身的变体。这些变体将受到自然选择的作用,因为其中一些会比其他变体更善于在当前环境中生存,从而在繁殖上胜过它们。<br />
<br />
<br />
==综述==<br />
<br />
===理论===<br />
<br />
{{See also|Von Neumann universal constructor}}<br />
<br />
Early research by [[John von Neumann]]<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref> established that replicators have several parts:<br />
<br />
[[约翰·冯·诺依曼_John_von_Neumann|约翰·冯·诺伊曼]]的早期研究<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref>表明复制机有几个组成部分:<br />
<br />
*A coded representation of the replicator<br />
*A mechanism to copy the coded representation<br />
*A mechanism for effecting construction within the host environment of the replicator<br />
<br />
*'''<font color="#ff8000">复制机(replicator)</font>'''的编码表示<br />
*一种复制该编码表示的机制<br />
*一种在复制机所处的宿主环境中实施构建的机制<br />
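冯·诺伊曼所列的三个组成部分可以用一个极简的 Python 玩具模型来示意。以下代码纯属示意性草图,其中 TAPE、copy_tape、construct 等名称均为本文自拟,并非任何现成系统的 API:

```python
# 玩具版"运动学复制机":机器由一条文本"磁带"(编码表示)构建而成。
# 复制 = 复制磁带 + 依据磁带在宿主环境中构建新机器。

TAPE = "arm,welder,tape_reader"      # 复制机的编码表示(零件清单)

def copy_tape(tape):
    """复制编码表示的机制。"""
    return str(tape)

def construct(tape):
    """在宿主环境中实施构建的机制:把磁带解码为一台"机器"。"""
    return {"parts": tape.split(","), "tape": tape}

def replicate(machine):
    """完整的复制循环:先复制磁带,再用它构建子代。"""
    return construct(copy_tape(machine["tape"]))

parent = construct(TAPE)
child = replicate(parent)
assert child == parent and child is not parent   # 结构相同,但并非同一对象
```

注意子代自身也携带完整的磁带,因此 replicate(child) 同样成立——这正是编码表示必须被原样复制的原因。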
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
这种模式可能存在例外,尽管尚未实现任何一例。例如,科学家们已经接近于在由 RNA 单体和转录酶组成的溶液“环境”中构建[https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ 可被复制的 RNA]。在这种情况下,本体就是基因组,而专门的复制机制是外部的。对外部复制机制的需求尚未被克服,这类系统更准确的描述是“辅助复制”而非“自复制”。<br />
<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a [[crystal]].<br />
<br />
然而,最简单的可能情况是只存在一个基因组。如果没有对自我繁殖步骤的某种说明,一个只有基因组的系统也许更适合被描述为类似晶体的东西。<br />
<br />
<br />
===自复制的种类===<br />
<br />
Recent research<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref> has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
最近的研究<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref>已经开始对复制机进行分类,通常基于它们所需的支持程度。<br />
<br />
*Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms.<br />
*[[Autotroph]]ic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products.<br />
*Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire.<br />
*Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale.<br />
<br />
*'''<font color="#ff8000">天然复制机(Natural replicators)</font>'''的设计全部或大部分来自非人类来源。这类系统包括自然生命形式。<br />
*'''<font color="#ff8000">自养复制机(Autotrophic replicators)</font>'''可以在自然环境中自我繁殖,自行开采所需的原材料。据推测,人类可以设计出非生物的自养复制机,并使其易于接受人类产品的规格。<br />
*'''<font color="#ff8000">自生产系统(Self-reproductive systems)</font>'''是假想中的系统,可以利用金属棒、金属丝等工业原料生产自身的副本。<br />
*'''<font color="#ff8000">自组装系统(Self-assembling systems)</font>'''用已完成、已交付的零件组装出自身的副本。这类系统的简单例子已在宏观尺度上得到展示。<br />
<br />
<br />
The design space for machine replicators is very broad. A comprehensive study<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> to date by [[Robert Freitas]] and [[Ralph Merkle]] has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.<br />
<br />
机械复制机的设计空间非常广阔。迄今为止,罗伯特·弗雷塔斯(Robert Freitas)和拉尔夫·默克尔(Ralph Merkle)的综合研究<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> 已经确定了137个设计维度并将其分为十几个独立的类别,包括: (1)复制控制,(2)复制信息,(3)复制基质,(4)复制机结构,(5)被动部件,(6)主动子单元,(7)复制机能量学,(8)复制机运动学,(9)复制过程,(10)复制机性能,(11)产物结构,和(12)可演化性。<br />
<br />
<br />
===一种自复制的计算机程序===<br />
<br />
{{Main|Quine (computing)}}<br />
<br />
In [[computer science]] a [[Quine (computing)|quine]] is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the [[Python (programming language)|Python programming language]] is:<br />
<br />
在计算机科学中,quine 是一种自复制的计算机程序:执行时输出自身的源代码。例如,用 Python 语言编写的一个 quine 是:<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
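可以把这段 quine 写入临时文件并运行,验证其输出确实与源代码逐字符相同。以下为验证草图:

```python
# 将 quine 写入临时文件,运行它,并核对输出与源代码完全一致。
import os
import subprocess
import sys
import tempfile

quine = "a='a=%r;print(a%%a)';print(a%a)"

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(quine)
    path = f.name
try:
    out = subprocess.run([sys.executable, path],
                         capture_output=True, text=True, check=True).stdout
finally:
    os.remove(path)

assert out.strip() == quine   # 输出即源代码:自复制成立
```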
<br />
<br />
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.<br />
<br />
一种更平凡的方法是编写一个能复制它所指向的任意数据流的程序,然后把它指向程序自身。在这种情况下,程序既被当作可执行代码,也被当作被操作的数据。这种方法在包括生物生命在内的大多数自复制系统中都很常见,而且更简单,因为它不要求程序包含对自身的完整描述。<br />
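这种“把程序指向自身”的思路可以用几行 Python 来示意。以下为假设性草图,copy_stream 为本文自拟的名称:

```python
# 并非真正的 quine:程序只是复制被指向的数据流;
# 把它指向自己的源文件,它就会打印出自己的源代码。
import sys

def copy_stream(path, out=sys.stdout):
    """把 path 指向的数据流原样写到 out。"""
    with open(path) as f:
        out.write(f.read())

if __name__ == "__main__":
    # 未给出参数时指向自身(sys.argv[0] 即本程序的源文件)
    copy_stream(sys.argv[1] if len(sys.argv) > 1 else sys.argv[0])
```

与真正的 quine 不同,这里程序并不包含自身的描述,而是依赖外部环境(文件系统)保存着它的源代码。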
<br />
<br />
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.<br />
<br />
在许多编程语言中,空程序是合法的,并且执行时不会产生错误或其他输出。因此输出与源代码相同,所以空程序是一种平凡的自复制程序。<br />
<br />
<br />
===自复制式平铺===<br />
<br />
{{See also|Self-similarity}}<br />
<br />
In [[geometry]] a self-replicating tiling is a tiling pattern in which several [[congruence (geometry)|congruent]] tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as [[tessellation]]. The "sphinx" [[hexiamond]] is the only known self-replicating [[pentagon]].<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref> For example, four such [[concave polygon|concave]] pentagons can be joined together to make one with twice the dimensions.<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref> [[Solomon W. Golomb]] coined the term [[rep-tiles]] for self-replicating tilings.<br />
<br />
在几何学中,'''<font color="#ff8000">自复制式平铺(self-replicating tiling)</font>'''是一种平铺模式,其中若干全等的图形可以拼合成一个与原图形相似的更大图形。这属于被称为密铺(tessellation)的研究领域。“狮身人面像”六联三角形(hexiamond)是唯一已知的能自复制的五边形<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref>。例如,四个这样的凹五边形可以拼合成一个尺寸为原来两倍的五边形。<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref>所罗门·格伦布(Solomon W. Golomb)为这类自复制平铺创造了 rep-tiles 这一术语。<br />
<br />
<br />
In 2012, [[Lee Sallows]] identified rep-tiles as a special instance of a [[self-tiling tile set]] or setiset. A setiset of order ''n'' is a set of ''n'' shapes that can be assembled in ''n'' different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-''n'' rep-tile is just a setiset composed of ''n'' identical pieces.<br />
<br />
2012年,李·萨洛斯(Lee Sallows)指出 rep-tiles 是自平铺图形集(self-tiling tile set,又称 setiset)的一个特例。一个 ''n'' 阶 setiset 是一组 ''n'' 个形状,它们能以 ''n'' 种不同的方式拼合成自身的放大副本。每个形状各不相同的 setiset 被称为“完美的”。一个 rep-''n'' 的 rep-tile 就是由 ''n'' 个相同图形组成的 setiset。<br />
<br />
<br />
{|<br />
|- style="vertical-align:bottom;"<br />
[[File:Self-replication of sphynx hexidiamonds.svg|thumb|A rep-tile-based setiset of order 4|left|text-bottom|260px|Four '[[Sphinx tiling|sphinx]]' hexiamonds can be put together to form another sphinx.]]<br />
[[File:A rep-tile-based setiset of order 4.png|thumb|A rep-tile-based setiset of order 4|right|text-bottom|290px|A perfect [[Self-tiling tile set|setiset]] of order 4]]<br />
|}<br />
{{clear}}<br />
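rep-tile“若干全等图形拼成放大副本”的性质可以直接用代码验证。下面的 Python 草图以 rep-4 的 L 形三格骨牌(L-tromino)为例(狮身人面像六联三角形的三角形网格坐标验证要繁琐得多,此处用方格网格上更简单的已知例子示意):

```python
# 验证 L 形三格骨牌是 rep-4 的:四个全等的 L 形恰好铺满放大两倍的 L 形。
# 每个图形用单位方格坐标的集合表示。

L_TROMINO = {(0, 0), (1, 0), (0, 1)}

def scale(cells, k):
    """把图形按因子 k 放大:每个方格变成 k×k 的方格块。"""
    return {(k * x + i, k * y + j)
            for (x, y) in cells for i in range(k) for j in range(k)}

def normalize(cells):
    """平移到原点,便于在旋转/镜像下比较形状。"""
    mx, my = min(x for x, _ in cells), min(y for _, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def congruent(a, b):
    """判断 a 与 b 在 8 个旋转/镜像对称下是否全等。"""
    shapes, c = set(), a
    for _ in range(4):
        c = {(-y, x) for x, y in c}                       # 旋转 90°
        shapes.add(normalize(c))
        shapes.add(normalize({(-x, y) for x, y in c}))    # 镜像
    return normalize(b) in shapes

# 一种已知的 rep-4 剖分:四块 L 形铺满放大两倍的 L 形
pieces = [
    {(0, 0), (1, 0), (0, 1)},   # 左下(原方向)
    {(2, 0), (3, 0), (3, 1)},   # 右下(旋转)
    {(0, 2), (0, 3), (1, 3)},   # 左上(旋转)
    {(1, 1), (2, 1), (1, 2)},   # 中央
]

target = scale(L_TROMINO, 2)
assert all(congruent(p, L_TROMINO) for p in pieces)   # 每块都全等于原形
assert set().union(*pieces) == target                 # 无缝
assert sum(len(p) for p in pieces) == len(target)     # 无重叠
```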
<br />
<br />
===自复制的粘土晶体===<br />
<br />
One form of natural self-replication that isn't based on DNA or RNA occurs in clay crystals.<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref> Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if e.g. placed in a water solution containing the crystal components; automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.<br />
<br />
粘土晶体中存在一种不基于 DNA 或 RNA 的天然自复制。粘土由大量小晶体组成,是一种促进晶体生长的环境。晶体由规则的原子晶格构成,例如放入含有晶体成分的水溶液中便能生长:晶体边界处的原子会自动排列成晶体形式。晶体中可能存在规则原子结构被打破的不规则处;晶体生长时,这些不规则处可能随之传播,形成一种晶体不规则结构的自复制。由于这些不规则结构可能影响晶体断裂形成新晶体的概率,带有这类不规则结构的晶体甚至可以被认为在经历演化发展。<br />
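“缺陷随生长传播”这一机制可以用一个极简的一维“晶体生长”玩具模型示意。以下代码纯属示意,grow、defect_rate 等名称与参数均为本文假设:

```python
# 玩具模型:逐层生长的一维"晶体",每一新层复制上一层,
# 偶尔引入缺陷 'B';缺陷一旦出现就被后续各层继承。
import random

def grow(seed_layer, n_layers, defect_rate=0.05, rng=None):
    rng = rng or random.Random(0)                # 固定随机种子,结果可复现
    layers = [list(seed_layer)]
    for _ in range(n_layers):
        new = [('B' if rng.random() < defect_rate else atom)
               for atom in layers[-1]]           # 复制下层原子,偶有突变
        layers.append(new)
    return layers

layers = grow("AAAAAAAA", 20)
counts = [layer.count('B') for layer in layers]
assert all(b >= a for a, b in zip(counts, counts[1:]))   # 缺陷只增不减
```

模型中缺陷位点一经出现便被逐层复制,正对应正文所说的“不规则性的自我复制”。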
<br />
<br />
===应用===<br />
<br />
It is a long-term goal of some engineering sciences to achieve a [[clanking replicator]], a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of [[labour (economics)|labor]], [[Capital (economics)|capital]] and [[distribution (business)|distribution]] in conventional [[factory|manufactured goods]].<br />
<br />
一些工程学科的长期目标是实现'''<font color="#ff8000">铿锵复制机(clanking replicator)</font>''',即一种能够自复制的物质装置。通常的理由是在保持制成品效用的同时降低单件成本。许多权威人士认为,在极限情况下,自复制产品的成本应逼近木材或其他生物材料的单位重量成本,因为自复制免去了传统工业制成品中的劳动力、资本和分销成本。<br />
<br />
<br />
A fully novel artificial replicator is a reasonable near-term goal.<br />
<br />
制造出一个全新的人工复制机是一个合理的近期目标。<br />
<br />
<br />
A [[NASA]] study recently placed the complexity of a [[clanking replicator]] at approximately that of [[Intel]]'s [[Pentium (brand)|Pentium]] 4 CPU.<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.<br />
<br />
美国宇航局最近的一项研究表明,铿锵复制机的复杂度大约相当于英特尔奔腾4处理器的复杂度。也就是说,这项技术在一个合理的商业时间规模内,是可以由一个相对较小的工程团队以一个合理的成本实现的。<br />
<br />
<br />
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.<br />
<br />
目前学术界对生物技术有着浓厚的兴趣,该领域也有高水平的资金投入,因此尝试利用现有细胞的复制能力正当其时,并且很可能带来重大的洞察和进展。<br />
<br />
<br />
A variation of self replication is of practical relevance in [[compiler]] construction, where a similar [[bootstrapping]] problem occurs as in natural self replication. A compiler ([[phenotype]]) can be applied on the compiler's own [[source code]] ([[genotype]]) producing the compiler itself. During compiler development, a modified ([[Mutation|mutated]]) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.<br />
<br />
自复制的一种变体在编译器构造中具有实际意义:其中会出现与天然自复制类似的自举(bootstrapping)问题。编译器(表现型)可以作用于编译器自身的源代码(基因型),产生编译器本身。在编译器开发过程中,使用修改(变异)过的源代码来创建下一代编译器。这一过程与天然自复制的不同之处在于,它由工程师指导,而不是由对象自身驱动。<br />
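这种“表现型作用于基因型”的循环可以用几行 Python 来示意。下面的 build、compile_program、GENOTYPE_V1 等均为本文自拟的玩具名称,并非任何真实编译器的 API:

```python
def build(source):
    """用"基因型"(源代码文本)构建出"表现型"(可调用的编译器)。"""
    ns = {}
    exec(source, ns)
    return ns["compile_program"]

# 第一代编译器的源代码:把玩具关键字 PRINT 翻译成 Python 的 print
GENOTYPE_V1 = (
    "def compile_program(src):\n"
    "    return src.replace('PRINT', 'print')  # 玩具式代码生成\n"
)
v1 = build(GENOTYPE_V1)

# 工程师"变异"源代码得到下一代:关键字改为 EMIT
GENOTYPE_V2 = GENOTYPE_V1.replace("'PRINT'", "'EMIT'")
v2 = build(GENOTYPE_V2)

assert v1("PRINT(1)") == "print(1)"   # 旧一代识别旧关键字
assert v2("EMIT(1)") == "print(1)"    # 新一代识别变异后的关键字
```

真实的编译器自举要复杂得多(编译器用它所编译的语言写成),但“下一代由上一代从变异源代码构建”的结构是一样的。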
<br />
<br />
<br />
==Mechanical self-replication 机械自复制==<br />
<br />
<br />
<br />
{{Main|Self-replicating machine}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
An activity in the field of robots is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:<br />
<br />
机器人学(robotics)领域的一项活动就是机器的自复制。由于所有机器人(至少在现代)都有相当数量的相同特性,一个自复制机器人(或者可能是一群机器人)需要做到以下几点:<br />
<br />
<br />
<br />
<br />
<br />
*Obtain construction materials<br />
*获得构建材料<br />
<br />
<br />
<br />
*Manufacture new parts including its smallest parts and thinking apparatus<br />
*制造新零件,包括最小的零件和思维组件<br />
<br />
<br />
<br />
*Provide a consistent power source<br />
*提供一个稳定一致的动力源<br />
<br />
<br />
<br />
*Program the new members<br />
*为新成员编程<br />
<br />
<br />
<br />
*Error-correct any mistakes in the offspring<br />
*纠正子代中的任何错误<br />
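上述步骤可以组织成一个最小的模拟循环。以下为纯示意性草图,类名、方法名与“材料”数值均为本文假设:

```python
# 玩具式自复制机器人:按正文列出的步骤产生一个经过校验的子代。
class Robot:
    def __init__(self, program):
        self.program = program                    # "思维装置"的内容

    def gather_materials(self):                   # 获取构建材料
        return {"metal": 10, "chips": 2}

    def manufacture(self, materials):             # 制造零件与思维装置
        assert materials["metal"] > 0 and materials["chips"] > 0
        return Robot(program="")                  # 尚未编程的裸机

    def power_up(self, child):                    # 提供稳定动力源
        child.powered = True

    def program_child(self, child):               # 为新成员编程
        child.program = self.program

    def error_correct(self, child):               # 纠正子代中的错误
        if child.program != self.program:
            child.program = self.program

    def replicate(self):
        child = self.manufacture(self.gather_materials())
        self.power_up(child)
        self.program_child(child)
        self.error_correct(child)
        return child

parent = Robot(program="replicate-forever")
child = parent.replicate()
assert child.program == parent.program and child.powered
```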
<br />
<br />
<br />
<br />
<br />
<br />
On a [[Nanotechnology|nano]] scale, [[Assembler (nanotechnology)|assemblers]] might also be designed to self-replicate under their own power. This, in turn, has given rise to the "[[grey goo]]" version of [[Armageddon]], as featured in such science fiction novels as ''[[Bloom (novel)|Bloom]]'', ''[[Prey (novel)|Prey]]'', and ''[[Recursion (novel)|Recursion]]''.<br />
<br />
在纳米尺度上,组装机(assemblers)也可能被设计成依靠自身动力进行自复制。这反过来催生了“灰蛊”(grey goo)版的世界末日,正如《花开》(Bloom)、《掠食》(Prey)和《递归》(Recursion)等科幻小说中所描绘的那样。<br />
<br />
<br />
<br />
<br />
<br />
The [[Foresight Institute]] has published guidelines for researchers in mechanical self-replication.<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a [[broadcast architecture]].<br />
<br />
美国前瞻协会(Foresight Institute)已经为机械自复制领域的研究者发布了指导方针。指导方针建议研究者使用若干特定技术来防止机械复制机失控,例如使用广播式架构(broadcast architecture)。<br />
<br />
<br />
<br />
<br />
<br />
For a detailed article on mechanical reproduction as it relates to the industrial age see [[mass production]].<br />
<br />
关于与工业时代相关的机械复制的详细文章,请参阅'''<font color="#ff8000">大规模生产(mass production)</font>'''。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Fields 研究领域==<br />
<br />
<br />
{{refimprove section|date=August 2017}}<br />
<br />
<br />
<br />
Research has occurred in the following areas:<br />
<br />
以下领域已开展了与自复制相关的研究:<br />
<br />
<br />
<br />
<br />
<br />
* [[Biology]] studies natural replication and replicators, and their interaction. These can be an important guide to avoid design difficulties in self-replicating machinery.<br />
<br />
* 生物学研究自然复制与复制机及其相互作用。这些可以为避免自复制机器设计中的困难提供重要指导。<br />
<br />
<br />
<br />
* In [[Chemistry]] self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set <ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref> (often part of [[Systems chemistry]] field).<br />
<br />
* 在化学领域,自复制研究通常关注一组特定的分子如何协同作用,在该集合内部相互复制<ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref>(这类研究通常属于系统化学领域)。<br />
<br />
<br />
<br />
* [[Meme]]tics studies ideas and how they propagate in human culture. Memes require only small amounts of material, and therefore have theoretical similarities to [[virus]]es and are often described as [[virus|viral]].<br />
<br />
* 模因论(Memetics)研究思想及其在人类文化中的传播。模因只需要很少的物质载体,因此在理论上与病毒相似,常被描述为“病毒式”传播。<br />
<br />
<br />
<br />
<br />
* [[Nanotechnology]] or more precisely, [[molecular nanotechnology]] is concerned with making [[Nanotechnology|nano]] scale [[assembler (nanotechnology)|assemblers]]. Without self-replication, capital and assembly costs of molecular machines become impossibly large.<br />
<br />
* 纳米技术,或者更准确地说,分子纳米技术,关注制造纳米尺度的组装机。如果没有自复制,分子机器的资本和组装成本将高得不可行。<br />
<br />
<br />
<br />
<br />
* Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.<br />
<br />
* 空间资源:美国航天局资助了多项设计研究,开发自复制机制以开采空间资源。这些设计大多包含能复制自身的计算机控制机械。<br />
<br />
<br />
<br />
* [[Computer security]]: Many computer security problems are caused by self-reproducing computer programs that infect computers — [[computer worm]]s and [[computer virus]]es.<br />
<br />
* 计算机安全: 许多计算机安全问题是由感染计算机的自复制计算机程序造成的——计算机蠕虫和计算机病毒。<br />
<br />
<br />
<br />
* In [[parallel computing]], it takes a long time to manually load a new program on every node of a large [[computer cluster]] or [[distributed computing]] system. Automatically loading new programs using [[mobile agent]]s can save the system administrator a lot of time and give users their results much quicker, as long as they don't get out of control.<br />
<br />
* 在并行计算中,在大型计算机集群或分布式计算系统的每个节点上手动加载新程序需要很长时间。使用移动代理(mobile agents)自动加载新程序,只要这些代理不失控,就能为系统管理员节省大量时间,并更快地把结果交给用户。<br />
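移动代理在集群中的自我扩散可以用一个简单的广度优先传播模型来示意。以下为玩具草图,节点名与拓扑均为本文假设:

```python
# 模拟移动代理从一个节点出发,把程序复制到所有可达节点。
from collections import deque

def propagate(topology, start):
    """广度优先传播;返回按"感染"顺序排列的节点列表。"""
    infected, queue, order = {start}, deque([start]), [start]
    while queue:
        node = queue.popleft()
        for neighbor in topology.get(node, []):
            if neighbor not in infected:        # 每个节点只加载一次,防止失控
                infected.add(neighbor)
                queue.append(neighbor)
                order.append(neighbor)
    return order

# 假设的 6 节点集群拓扑(邻接表)
cluster = {
    "n0": ["n1", "n2"],
    "n1": ["n3"],
    "n2": ["n3", "n4"],
    "n3": ["n5"],
    "n4": [],
    "n5": [],
}
assert propagate(cluster, "n0") == ["n0", "n1", "n2", "n3", "n4", "n5"]
```

“只加载一次”的去重检查正对应正文强调的“不失控”:没有它,代理会在有环拓扑中无限复制。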
<br />
==In industry 在工业界==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Space exploration and manufacturing 太空探索和制造业===<br />
<br />
<br />
<br />
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an [[autotroph]]ic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. [[Von Neumann Probe|Another model]] of self-replicating machine would copy itself through the galaxy and universe, sending information back.<br />
<br />
太空系统中自复制的目标是以较低的发射质量开发利用大量物质。例如,一台自养的自复制机器可以用太阳能电池覆盖月球或行星,并用微波将电力传送回地球。一旦就位,建造其自身的同一套机械也可以生产原材料或制成品,包括运送产品的运输系统。另一种自复制机器的模型则会在星系和宇宙中不断复制自身,并把信息传送回来。<br />
<br />
<br />
<br />
<br />
<br />
In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.<br />
<br />
一般来说,由于这些系统是自养的,它们是已知最难实现、最复杂的复制机。它们也被认为是最危险的,因为它们的繁殖不需要人类的任何投入。<br />
<br />
<br />
<br />
<br />
<br />
A classic theoretical study of replicators in space is the 1980 [[NASA]] study of autotrophic clanking replicators, edited by [[Robert Freitas]].<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
关于太空复制机的一个经典理论研究,是1980年由罗伯特·弗雷塔斯(Robert Freitas)主编的 NASA 自养铿锵复制机研究。<br />
<br />
<br />
<br />
<br />
<br />
Much of the design study was concerned with a simple, flexible chemical system for processing lunar [[regolith]], and the differences between the ratio of elements needed by the replicator, and the ratios available in regolith. The limiting element was [[Chlorine]], an essential element to process regolith for [[Aluminium]]. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts.<br />
<br />
设计研究的大部分内容涉及一个用于处理月球风化层的简单而灵活的化学系统,以及复制机所需元素比例与风化层中可得比例之间的差异。限制性元素是氯,它是从风化层中提炼铝所必需的元素。氯在月球风化层中非常稀少,引入适量的氯即可确保显著更快的复制速度。<br />
<br />
<br />
<br />
<br />
<br />
The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic [[robot]].<br />
<br />
参考设计指定了由计算机控制、在轨道上运行的小型电动车。每辆车可以装有一只简单的机械手或一个小型推土铲,构成一个基本的机器人。<br />
<br />
<br />
<br />
<br />
<br />
Power would be provided by a "canopy" of [[solar cell]]s supported on pillars. The other machinery could run under the canopy.<br />
<br />
Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery could run under the canopy.<br />
<br />
电力将由支撑在支柱上的“天篷”状的太阳能电池提供。其他的机器可以在天篷下面运转。<br />
<br />
<br />
<br />
<br />
<br />
A "[[casting]] [[robot]]" would use a robotic arm with a few sculpting tools to make [[plaster]] [[molding (process)|mold]]s. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock ([[basalt]]) or purified metals. An [[electricity|electric]] [[oven]] melted the materials.<br />
<br />
A "casting robot" would use a robotic arm with a few sculpting tools to make plaster molds. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock (basalt) or purified metals. An electric oven melted the materials.<br />
<br />
一个“铸造机器人”将使用一个机械手臂和一些雕刻工具来制作石膏模具。石膏模具易于制作,而且能够生产表面光洁度好且精密的零件。然后,机器人将用非导电熔岩(玄武岩)或纯金属铸造大部分零件。这些材料由电炉熔化。<br />
<br />
<br />
<br />
<br />
<br />
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".<br />
<br />
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".<br />
<br />
他们提出了一个探索性的、更为复杂的“芯片工厂”来生产计算机和电子系统,但设计师们也表示,把这些芯片当作“维生素”一样从地球运送过去,也许会被证明更为可行。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Molecular manufacturing 分子制造===<br />
<br />
<br />
<br />
{{Main|Molecular nanotechnology#Replicating nanorobots}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Nanotechnology|Nanotechnologists]] in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating [[assembler (nanotechnology)|assembler]] of [[nanometer]] dimensions [http://www.MolecularAssembler.com/KSRM/4.11.3.htm].<br />
<br />
Nanotechnologists in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating assembler of nanometer dimensions [http://www.MolecularAssembler.com/KSRM/4.11.3.htm].<br />
<br />
纳米技术学家尤其相信,在人类设计出一种纳米尺度的自复制组装器之前,他们的工作很可能无法达到成熟的状态。<br />
<br />
<br />
<br />
<br />
<br />
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether [[molecular manufacturing]] is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a [[Lego]]-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether molecular manufacturing is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a Lego-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
这些系统比自养系统简单得多,因为它们可被提供纯净的原料和能源。它们不需要再生这些材料。这种区别是关于分子制造是否可行的一些争论的根源。许多权威认为这是不可能的,他们明确地引证了复杂自养自复制系统的资料;而许多认同这种可能性的权威人士清楚地引用了已经被证明的更简单的自组装系统的资料。与此同时,2003年的一项实验展示了一个乐高积木自主机器人,它能够按照预先设定的轨道,从外部提供的4个组件开始,精确地组装出自己的复制品。<br />
<br />
<br />
<br />
<br />
<br />
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of [[protein biosynthesis]] (also see the listing for [[RNA]]).<br />
<br />
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of protein biosynthesis (also see the listing for RNA).<br />
<br />
仅仅利用现有细胞的复制能力是不够的,因为蛋白质的生物合成过程存在局限性(另见 RNA 词条)。<br />
<br />
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.<br />
<br />
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.<br />
<br />
我们需要的是合理设计一种具有更广泛合成能力的全新复制因子。<br />
<br />
<br />
<br />
<br />
<br />
In 2011, New York University scientists have developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They have demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
<br />
In 2011, New York University scientists have developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They have demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.<br />
<br />
2011年,纽约大学的科学家们开发出了能够自复制的人工结构,这一过程有望产生新型材料。他们已经证明,不仅可以复制像细胞 DNA 或 RNA 这样的分子,还可以复制原则上能够呈现多种不同形态、具有多种不同功能特征、并可与多种不同类型化学物质相关联的离散结构。<br />
<br />
<br />
<br />
<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see [[alternative biochemistry]].<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see alternative biochemistry.<br />
<br />
有关假设的自我复制系统的其他化学基础的讨论,请参阅替代生物化学。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==See also 请参阅==<br />
<br />
<br />
*[https://zhuanlan.zhihu.com/p/135833919 从自我复制到自我意识]<br />
<br />
<br />
<br />
* [[Artificial life]]<br />
* 人造生命<br />
<br />
<br />
* [[Astrochicken]]<br />
* 太空鸡实验<br />
<br />
<br />
* [[Autopoiesis]]<br />
* 自创生<br />
<br />
<br />
* [[Complex system]]<br />
* 复杂系统<br />
<br />
<br />
* [[DNA replication]]<br />
* DNA复制<br />
<br />
<br />
* [[Life]]<br />
* 生命<br />
<br />
<br />
* [[Robot]]<br />
* 机器人<br />
<br />
<br />
* [[RepRap]] (self-replicated 3D printer)<br />
* 自复制3D打印机(开源项目)<br />
<br />
<br />
* [[Self-replicating machine]]<br />
* 自复制机器<br />
<br />
<br />
** [[Self-replicating spacecraft]]<br />
** 自复制空间飞行器<br />
<br />
<br />
* [[Space manufacturing]]<br />
* 空间制造<br />
<br />
<br />
* [[Von Neumann universal constructor]]<br />
* 冯·诺依曼通用构造器<br />
<br />
<br />
* [[Virus]]<br />
* 病毒<br />
<br />
<br />
* [[Von Neumann machine (disambiguation)]]<br />
* 冯·诺依曼机<br />
<br />
<br />
* [[Self reconfigurable]]<br />
* 自重构<br />
<br />
<br />
* [[Final Anthropic Principle]]<br />
* 最终人存原理<br />
<br />
<br />
* [[Positive feedback]]<br />
* 正反馈<br />
<br />
<br />
* [[Harmonic]]<br />
* 谐波<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==References 参考文献==<br />
<br />
<br />
<br />
{{reflist}}<br />
<br />
<br />
<br />
;Notes<br />
<br />
Notes<br />
<br />
注释<br />
<br />
{{refbegin}}<br />
<br />
<br />
<br />
<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
<br />
<br />
<br />
* [[s:Advanced Automation for Space Missions|Advanced Automation for Space Missions]], a 1980 NASA study edited by [[Robert Freitas]]<br />
<br />
<br />
<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines] first comprehensive survey of entire field in 2004 by [[Robert Freitas]] and [[Ralph Merkle]]<br />
<br />
<br />
<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advanced Concepts study by General Dynamics] - concluded that complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
<br />
<br />
<br />
* ''[[Gödel, Escher, Bach]]'' by [[Douglas Hofstadter]] (detailed discussion and many examples)<br />
<br />
<br />
<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.) Contemporary Math. vol. 135 (1992), 239-264.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:Self-replication| ]]<br />
<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Self-replication]]. Its edit history can be viewed at [[自复制/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E7%94%B5%E8%B7%AF&diff=17882合成生物电路2020-11-04T13:39:22Z<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共956,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
--[[用户:小趣木木|小趣木木]]([[用户讨论:小趣木木|讨论]])文本缺失 需要补充<br />
{{Synthetic biology}}<br />
<br />
[[File:Lac Operon.svg|thumb|275px|Lac Operon|The ''lac'' operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. <br /><br />
<br />
[[File:Lac Operon.svg|thumb|275px|Lac Operon|The lac operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. <br /><br />
<br />
[文件: Lac Operon.svg | thumb | 275px | 乳糖操纵子(lac operon)是一种天然的生物电路,许多合成电路都以它为基础。上图:阻遏状态;下图:活跃状态。<br />
<br />
'''''1'': RNA polymerase, ''2'': Repressor, ''3'': Promoter, ''4'': Operator, ''5'': Lactose, ''6'': ''lacZ'', ''7'': ''lacY'', ''8'': ''lacA''.]]<br />
<br />
1: RNA polymerase, 2: Repressor, 3: Promoter, 4: Operator, 5: Lactose, 6: lacZ, 7: lacY, 8: lacA.]]<br />
<br />
1: RNA 聚合酶,2: 阻遏蛋白,3: 启动子,4: 操纵基因,5: 乳糖,6: lacZ,7: lacY,8: lacA]<br />
<br />
<br />
<br />
'''Synthetic biological circuits''' are an application of [[synthetic biology]] where biological parts inside a [[Cell (biology)|cell]] are designed to perform logical functions mimicking those observed in [[electronic circuit]]s. The applications range from simply inducing production to adding a measurable element, like [[Green Fluorescent Protein|GFP]], to an existing [[Gene regulatory network|natural biological circuit]], to implementing completely new systems of many parts.<ref name="Kobayashi">{{cite journal | last1 = Kobayashi | first1 = H. | last2 = Kærn | first2 = M. | last3 = Araki | first3 = M. | last4 = Chung | first4 = K. | last5 = Gardner | first5 = T. S. | last6 = Cantor | first6 = C. R. | last7 = Collins | first7 = J. J. | year = 2004 | title = Programmable cells: Interfacing natural and engineered gene networks | journal = PNAS | volume = 101 | issue = 22| pages = 8414–8419 | doi=10.1073/pnas.0402940101 | pmid=15159530 | pmc=420408}}</ref><br />
<br />
Synthetic biological circuits are an application of synthetic biology where biological parts inside a cell are designed to perform logical functions mimicking those observed in electronic circuits. The applications range from simply inducing production to adding a measurable element, like GFP, to an existing natural biological circuit, to implementing completely new systems of many parts.<br />
<br />
合成生物电路是合成生物学的一种应用:细胞内的生物部件被设计用来执行模仿电子电路的逻辑功能。其应用范围从简单地诱导生产、向现有的天然生物电路添加可测量的元件(如绿色荧光蛋白 GFP),到实现由许多部件组成的全新系统。<br />
<br />
[[Image:Protein translation.gif|thumb|300px| A [[ribosome]] is a [[biological machine]].核糖体是一个生物机器。]]<br />
<br />
A ribosome is a biological machine.<br />
<br />
核糖体是一种生物机器。<br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented.<ref name="SynBioFaq">{{cite web|title=Synthetic Biology: FAQ|url=http://syntheticbiology.org/FAQ.html|work=SyntheticBiology.org|accessdate=21 December 2011|url-status=dead|archiveurl=https://web.archive.org/web/20021212065409/http://syntheticbiology.org/faq.html|archivedate=12 December 2002}}</ref> These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "[[biological machine]]s" to perform a vast range of useful functions.<ref name="Kobayashi"/><br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented. These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "biological machines" to perform a vast range of useful functions. The second, by Michael Elowitz and Stanislas Leibler, showed that three repressor genes could be connected to form a negative feedback loop termed the Repressilator that produces self-sustaining oscillations of protein levels in E. coli.<br />
<br />
合成生物学的目标是生成一系列可调节、特性明确的部件或模块,利用它们可以方便地设计和实现任何想要的合成生物电路。这些电路可以用来修改细胞功能、使细胞对环境条件作出响应,或影响细胞的发育。通过在细胞系统中实现合理、可控的逻辑元件,研究人员可以把活体系统当作工程化的“生物机器”来执行大量有用的功能。第二个电路由迈克尔·埃洛维茨(Michael Elowitz)和斯坦尼斯拉斯·雷布勒(Stanislas Leibler)提出:三个阻遏基因可以连接成一个负反馈环路,称为'''<font color="#ff8000">抑制震荡子(Repressilator)</font>''',能在大肠杆菌中产生蛋白质水平的自维持振荡。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in systems biology with more publications detailing synthetic biological circuits published every year. There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition manages the creation and standardization of BioBrick parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
目前,合成电路是系统生物学研究的一个新兴领域,每年都有更多详细介绍合成生物电路的论文发表。在鼓励教育和推广方面也有相当大的投入:国际基因工程机器大赛(iGEM)管理着 BioBrick 生物积木部件的创建和标准化,使本科生和高中生能够设计自己的合成生物电路。<br />
<br />
The first natural gene circuit studied in detail was the [[lac operon]]. In studies of [[diauxie|diauxic growth]] of ''[[E. coli]]'' on two-sugar media, [[Jacques Monod]] and [[Francois Jacob]] discovered that ''E.coli'' preferentially consumes the more easily processed [[glucose]] before switching to [[lactose]] metabolism. They discovered that the mechanism that controlled the metabolic "switching" function was a two-part control mechanism on the lac operon. When lactose is present in the cell the [[enzyme]] [[β-galactosidase]] is produced to convert lactose into [[glucose]] or [[galactose]]. When lactose is absent in the cell the lac repressor inhibits the production of the enzyme β-galactosidase to prevent any inefficient processes within the cell.<br />
<br />
<br />
<br />
The lac operon is used in the [[biotechnology]] industry for production of [[recombinant DNA|recombinant]] [[proteins]] for therapeutic use. The gene or genes for producing an [[exogenous]] protein are placed on a [[plasmid]] under the control of the lac promoter. Initially the cells are grown in a medium that does not contain lactose or other sugars, so the new genes are not expressed. Once the cells reach a certain point in their growth, [[IPTG|Isopropyl β-D-1-thiogalactopyranoside (IPTG)]] is added. IPTG, a molecule similar to lactose, but with a sulfur bond that is not hydrolyzable so that E. Coli does not digest it, is used to activate or "[[Regulation of gene expression#Inducible vs. repressible systems|induce]]" the production of the new protein. Once the cells are induced, it is difficult to remove IPTG from the cells and therefore it is difficult to stop expression.<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for metabolic engineering, and synthetic biology. Those demonstrated successfully include pharmaceutical production, and fuel production. However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple repressor or promoter is introduced to facilitate creation of the product, or inhibition of a competing pathway. However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
<br />
合成生物电路既有近期应用也有长期应用,包括代谢工程和合成生物学中的多种应用,已成功演示的包括药物生产和燃料生产。然而,如果不借助合成细胞电路的基本原理,直接导入基因的方法本身并不有效。例如,上述每个成功的系统都采用了引入“全或无”诱导或表达的方法,即引入一个含有简单阻遏子或启动子的生物电路,以促进产物的生成或抑制竞争通路。然而,由于对细胞网络和天然电路的理解有限,实现具有更精确控制和反馈的更稳健方案受到了阻碍。这正是合成细胞电路当前的研究兴趣所在。<br />
<br />
<br />
<br />
Two early examples of synthetic biological circuits were published in [[Nature (journal)|Nature]] in 2000. One, by Tim Gardner, Charles Cantor, and [[James Collins (bioengineer)|Jim Collins]] working at [[Boston University]], demonstrated a "bistable" switch in ''E. coli''. The switch is turned on by heating the culture of bacteria and turned off by addition of IPTG. They used GFP as a reporter for their system.<ref name="Gardner">Gardner, T.s., Cantor, C.R., Collins, J. Construction of a genetic toggle switch in Escherichia coli. ''Nature'' 403, 339-342 (20 January 2000).</ref> The second, by [[Michael Elowitz]] and [[Stanislas Leibler]], showed that three repressor genes could be connected to form a negative feedback loop termed the [[Repressilator]] that produces self-sustaining oscillations of protein levels in ''E. coli.''<ref>{{Cite journal|last=Stanislas Leibler|last2=Elowitz|first2=Michael B.|date=January 2000|title=A synthetic oscillatory network of transcriptional regulators|journal=Nature|volume=403|issue=6767|pages=335–338|doi=10.1038/35002125|pmid=10659856|issn=1476-4687}}</ref><br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin. To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
对细胞电路理解的深入可以带来令人兴奋的新改造,例如能对环境刺激作出反应的细胞。举例来说,可以开发出能感知有毒环境、并通过激活降解相应毒素的通路来作出反应的细胞。要开发这样的细胞,就必须创建一个能对给定刺激作出适当响应的复杂合成细胞电路。<br />
<br />
<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in [[systems biology]] with more publications detailing synthetic biological circuits published every year.<ref>{{cite journal | last1 = Purnick | first1 = Priscilla E. M. | last2 = Weis | first2 = Ron | year = 2009 | title = The second wave of synthetic biology: from modules to systems | url = | journal = Nature Reviews Molecular Cell Biology | volume = 10 | issue = 6| pages = 410–422 | doi = 10.1038/nrm2698 | pmid=19461664}}</ref> There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition<ref>International Genetically Engineered Machines (iGem) http://igem.org/Main_Page</ref> manages the creation and standardization of [[BioBrick]] parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
Given synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play" synthetic cells can be developed implementing only the pathways necessary for cell survival reproduction. From this cell, to be thought of as a minimal genome cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
鉴于合成细胞电路代表了一种控制细胞活动的形式,可以推断:只要完全了解细胞通路,就能开发出只实现细胞生存和繁殖所必需通路的“即插即用”合成细胞。从这个可视为最小基因组的细胞出发,可以添加工具箱中的部件,为有效的反馈系统构建带有合适合成电路的明确定义的通路。借助这种自底向上的构建方法和拟议的电路部件映射数据库,可以沿用计算机或电子电路建模所用的技术来重新设计和模拟细胞,便于排除故障并预测行为与产量。<br />
<br />
<br />
<br />
== Interest and goals 研究方向和目标==<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for [[metabolic engineering]], and [[synthetic biology]]. Those demonstrated successfully include pharmaceutical production,<ref>{{cite journal | last1 = Ro | first1 = D.-K. | last2 = Paradise | first2 = E.M. | last3 = Ouellet | first3 = M. | last4 = Fisher | first4 = K.J. | last5 = Newman | first5 = K.L. | last6 = Ndungu | first6 = J.M. | last7 = Ho | first7 = K.A. | last8 = Eachus | first8 = R.A. | last9 = Ham | first9 = T.S. | last10 = Kirby | first10 = J. | last11 = Chang | first11 = M.C.Y. | last12 = Withers | first12 = S.T. | last13 = Shiba | first13 = Y. | last14 = Sarpong | first14 = R. | last15 = Keasling | first15 = J.D. | year = 2006 | title = Production of the antimalarial drug precursor artemisinic acid in engineered yeast | url = | journal = Nature | volume = 440 | issue = 7086| pages = 940–943 | doi=10.1038/nature04640 | pmid=16612385}}</ref> and fuel production.<ref>{{cite journal | last1 = Fortman | first1 = J.L. | last2 = Chhabra | first2 = S. | last3 = Mukhopadhyay | first3 = A. | last4 = Chou | first4 = H. | last5 = Lee | first5 = T.S. | last6 = Steen | first6 = E. | last7 = Keasling | first7 = J.D. | year = 2008 | title = Biofuel alternatives to ethanol: pumping the microbial well | url = https://digital.library.unt.edu/ark:/67531/metadc1013351/| journal = Trends Biotechnol | volume = 26 | issue = 7| pages = 375–381 | doi=10.1016/j.tibtech.2008.03.008| pmid = 18471913 }}</ref> However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple [[repressor]] or [[promoter (genetics)|promoter]] is introduced to facilitate creation of the product, or inhibition of a competing pathway. 
However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
<br />
<br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin.<ref>{{cite journal | last1 = Keasling | first1 = J.D. | year = 2008 | title = Synthetic biology for synthetic chemistry. | url = | journal = ACS Chem Biol | volume = 3 | issue = 1| pages = 64–76 | doi=10.1021/cb7002434| pmid = 18205292 | title-link = synthetic chemistry }}</ref> To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
Repressilator<br />
<br />
抑制震荡子<br />
<br />
<br />
<br />
Mammalian tunable synthetic oscillator<br />
<br />
哺乳动物可调谐合成振荡器<br />
<br />
Given synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play"<ref name="Kobayashi" /> cells with well defined genetic circuitry can be engineered. It is widely believed that if a proper toolbox of parts is generated,<ref>{{cite journal | last1 = Lucks | first1 = Julius B | last2 = Qi | first2 = Lei | last3 = Whitaker | first3 = Weston R | last4 = Arkin | first4 = Adam P | year = 2008 | title = Toward scalable parts families for predictable design of biological circuits | url = | journal = Current Opinion in Microbiology | volume = 11 | issue = 6| pages = 567–573 | doi = 10.1016/j.mib.2008.10.002 | pmid = 18983935 }}</ref> synthetic cells can be developed implementing only the pathways necessary for cell survival reproduction. From this cell, to be thought of as a minimal [[genome]] cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
Bacterial tunable synthetic oscillator<br />
<br />
细菌可调谐合成振荡器<br />
<br />
<br />
<br />
Coupled bacterial oscillator<br />
<br />
耦合细菌振荡器<br />
<br />
== Example circuits 电路示例 ==<br />
<br />
Globally coupled bacterial oscillator<br />
<br />
全局耦合细菌振荡器<br />
<br />
<br />
<br />
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression. <br />
<br />
埃洛维茨(Elowitz)等人和冯(Fung)等人创建了振荡电路,利用多重自调节机制使基因产物的表达产生随时间的振荡。<br />
<br />
=== Oscillators 振荡器 ===<br />
<br />
# [[Repressilator]]<br />
<br />
# Mammalian tunable synthetic oscillator<br />
<br />
Toggle-switch<br />
<br />
拨动开关<br />
<br />
# Bacterial tunable synthetic oscillator<br />
<br />
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli resulting in persistent responses.<br />
<br />
加德纳(Gardner)等人利用两个控制单元之间的相互抑制,实现了一个能够以双稳态方式控制细胞的拨动开关:瞬时刺激产生持久的响应。<br />
<br />
# Coupled bacterial oscillator<br />
耦合细菌振荡器<br />
<br />
# Globally coupled bacterial oscillator<br />
* 全局耦合细菌振荡器<br />
<br />
The logical [[OR gate]].<br />
<br />
逻辑或门。<br />
<br />
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression.埃洛维茨等人和冯等人创造了一种振荡电路,它使用多个自调节机制来形成基因表达的依赖于时间的振荡器。<ref>{{cite journal | last1 = Elowitz | first1 = M.B. | last2 = Leibler | first2 = S. | year = 2000 | title = A synthetic oscillatory network of transcriptional regulators | pmid = 10659856| journal = Nature | volume = 403 | issue = 6767| pages = 335–338 | doi=10.1038/35002125}}</ref><ref>{{cite journal | last1 = Fung | first1 = E. | last2 = Wong | first2 = W.W. | last3 = Suen | first3 = J.K. | last4 = Bulter | first4 = T. | last5 = Lee | first5 = S. | last6 = Liao | first6 = J.C. | year = 2005 | title = A synthetic gene–metabolic oscillator | url = | journal = Nature | volume = 435 | issue = 7038| pages = 118–122 | doi=10.1038/nature03508| pmid = 15875027 }}</ref> <br />
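The ring of three repressors described above can be illustrated with a minimal rate-equation sketch; the protein-only model and every parameter value below are illustrative assumptions for this entry, not the published Elowitz–Leibler model. 下面用一个极简的速率方程示意抑制震荡子的动力学(模型形式与参数均为本词条的演示性假设,并非原论文数值):

```python
# Minimal sketch of repressilator-style dynamics: three genes in a ring,
# each repressing the next one. In this reduced protein-only model a Hill
# coefficient n > 2 is needed for sustained oscillations. All parameter
# values are illustrative assumptions, not those of Elowitz & Leibler.

def simulate_repressilator(steps=50000, dt=0.01, alpha=50.0, n=3.0, beta=0.2):
    p = [1.0, 1.5, 2.0]   # slightly asymmetric initial protein levels
    trajectory = []
    for _ in range(steps):
        nxt = []
        for i in range(3):
            repressor = p[(i - 1) % 3]                   # gene i-1 represses gene i
            production = alpha / (1.0 + repressor ** n)  # repressive Hill term
            nxt.append(p[i] + dt * (production - beta * p[i]))  # forward Euler step
        p = nxt
        trajectory.append(p[0])
    return trajectory
```

With these assumed parameters the symmetric fixed point is unstable, so protein levels settle into self-sustaining oscillations rather than a steady state, which is the qualitative behavior the text attributes to the Repressilator.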
<br />
<br />
<br />
=== Bistable switches 双稳态开关===<br />
<br />
Synthetic gene circuits can control gene expression heterogeneity and can be controlled independently of the gene expression mean.<br />
<br />
合成基因电路可以控制基因表达的异质性,并且可以独立于基因表达均值进行控制。<br />
<br />
# Toggle-switch<br />
<br />
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli resulting in persistent responses<ref name="Gardner" />.<br />
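The bistable behavior described here can be sketched with two mutually repressive rate equations; the functional form and parameter values below are illustrative assumptions rather than the exact model of Gardner et al. 下面用一对相互抑制的速率方程示意这种双稳态行为(模型与参数均为演示性假设):

```python
# Minimal sketch of a genetic toggle switch: two repressors u and v, each
# inhibiting the other's synthesis. With strong enough synthesis (alpha)
# and cooperativity (n), the system is bistable: it latches into either a
# u-high/v-low or u-low/v-high state depending on its history.
# Parameter values are illustrative assumptions, not Gardner et al.'s.

def settle_toggle(u, v, steps=20000, dt=0.01, alpha=10.0, n=2.0):
    for _ in range(steps):
        du = alpha / (1.0 + v ** n) - u   # v represses u; u decays linearly
        dv = alpha / (1.0 + u ** n) - v   # u represses v; v decays linearly
        u, v = u + dt * du, v + dt * dv   # forward Euler step
    return u, v
```

Starting with a transient excess of either repressor latches the circuit into the corresponding stable state and it stays there after the stimulus is gone, mirroring "transient stimuli resulting in persistent responses".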
<br />
<br />
<br />
Engineered systems are the result of implementation of combinations of different control mechanisms. A limited counting mechanism was implemented by a pulse-controlled gene cascade and application of logic elements enables genetic "programming" of cells as in the research of Tabor et al., which synthesized a photosensitive bacterial edge detection program.<br />
<br />
工程系统是不同控制机制组合实现的结果。通过脉冲控制的基因级联实现了一个有限的计数机制,而逻辑元件的应用使细胞的遗传“编程”成为可能,例如泰伯(Tabor)等人的研究合成了一个光敏细菌边缘检测程序。<br />
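The logic behind the light-sensing edge-detection program can be caricatured on a grid of cells: a cell reports an edge when it is illuminated AND has at least one dark neighbor (in the real circuit, dark cells broadcast a diffusible signal that lit cells combine with their own light input). This grid abstraction is an illustrative assumption, not Tabor et al.'s actual genetic program. 下面用一个网格模型示意细菌边缘检测的逻辑(仅为演示性抽象):

```python
# Conceptual sketch of bacterial edge detection: each "cell" outputs 1 when
# it is lit AND at least one 4-connected neighbor is dark. This mimics the
# AND logic of (light sensed) AND (signal from a dark neighbor). The grid
# abstraction is an illustrative assumption, not the published circuit.

def detect_edges(image):
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not image[r][c]:
                continue  # dark cells never report an edge
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < rows and 0 <= nc < cols and not image[nr][nc]:
                    out[r][c] = 1   # lit cell bordering a dark cell: an edge
                    break
    return out
```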
<br />
=== Logical operators 逻辑运算===<br />
<br />
[[File:SynBioCirc-AndLogicGate.jpg|frame|center|The logical [[AND gate]].逻辑与门<ref name="rocha">{{cite journal | last1 = Silva-Rocha | first1 = R. | last2 = de Lorenzo | first2 = V. | year = 2008 | title = Mining logic gates in prokaryotic transcriptional regulation networks | url = | journal = FEBS Letters | volume = 582 | issue = 8| pages = 1237–1244 | doi=10.1016/j.febslet.2008.01.060 | pmid=18275855}}</ref><ref name="buchler">{{cite journal | last1 = Buchler | first1 = N.E. | last2 = Gerland | first2 = U. | last3 = Hwa | first3 = T. | year = 2003 | title = On schemes of combinatorial transcription logic | journal = PNAS | volume = 100 | issue = 9| pages = 5136–5141 | doi=10.1073/pnas.0930314100 | pmid=12702751 | pmc=404558}}</ref> If Signal A '''AND''' Signal B are present, then the desired gene product will result. All promoters shown are inducible, activated by the displayed gene product. Each signal activates expression of a separate gene (shown in light blue). The expressed proteins then can either form a complete complex in [[cytosol]], that is capable of activating expression of the output (shown), or can act separately to induce expression, such as separately removing an inhibiting protein and inducing activation of the uninhibited promoter.如果信号 A 和信号 B 同时存在,就会产生期望的基因产物。所有显示的启动子都是可诱导的,由所示的基因产物激活。每个信号激活一个单独基因的表达(浅蓝色所示)。表达出的蛋白质既可以在细胞溶质中形成能够激活输出表达的完整复合体(如图所示),也可以分别发挥作用来诱导表达,例如一个去除抑制蛋白、另一个诱导激活解除抑制的启动子。]]<br />
<br />
<br />
<br />
Computational design and evaluation of DNA circuits to achieve optimal performance<br />
<br />
实现最佳性能的 DNA 电路的计算设计和评估<br />
<br />
[[File:SynBioCirc-OrLogicGate.jpg|frame|center|The logical [[OR gate]].逻辑或门<ref name="rocha" /><ref name="buchler" /> If Signal A '''OR''' Signal B are present, then the desired gene product will result. All promoters shown are inducible. Either signal is capable of activating the expression of the output gene product, and only the action of a single promoter is required for gene expression. Post-transcriptional regulation mechanisms can prevent the presence of both inputs producing a compounded high output, such as implementing a low binding affinity [[ribosome binding site]].只要信号 A 或信号 B 存在,期望的基因产物就会被表达。图中所示的启动子都是诱导型的。任一信号都能激活输出基因产物的表达,基因表达只需单个启动子发挥作用。转录后调控机制可以防止两个输入同时存在时产生叠加的高输出,例如采用一个低结合亲和力的核糖体结合位点。]]<br />
<br />
<br />
<br />
Recent developments in artificial gene synthesis and the corresponding increase in competition within the industry have led to a significant drop in price and wait time of gene synthesis and helped improve methods used in circuit design. At the moment, circuit design is improving at a slow pace because of insufficient organization of known multiple gene interactions and mathematical models. This issue is being addressed by applying computer-aided design (CAD) software to provide multimedia representations of circuits through images, text and programming language applied to biological circuits. Some of the more well known CAD programs include GenoCAD, Clotho framework and j5. GenoCAD uses grammars, which are either opensource or user generated "rules" which include the available genes and known gene interactions for cloning organisms. Clotho framework uses the Biobrick standard rules.<br />
<br />
最近人工基因合成领域的发展以及行业内竞争的相应加剧,使基因合成的价格和等待时间显著下降,并帮助改进了电路设计中使用的方法。目前,由于对已知的多基因相互作用和数学模型整理不足,电路设计的进展较为缓慢。这一问题正通过应用计算机辅助设计(CAD)软件来解决:这类软件利用图像、文本和应用于生物电路的编程语言来提供电路的多媒体表示。较著名的 CAD 程序包括 GenoCAD、Clotho 框架和 j5。GenoCAD 使用“语法”,即开源的或由用户生成的“规则”,其中包括可用于克隆生物的基因和已知的基因相互作用。Clotho 框架使用 BioBrick(生物积木)标准规则。<br />
<br />
[[File:SynBioCirc-NandLogicGate.jpg|frame|center|The logical [[Negated AND gate]].逻辑与非门<ref name="rocha" /><ref name="buchler" /> If Signal A '''AND''' Signal B are present, then the desired gene product will '''NOT''' result. All promoters shown are inducible. The activating promoter for the output gene is constitutive, and thus not shown. The constitutive promoter for the output gene keeps it "on" and is only deactivated when (similar to the AND gate) a complex as a result of two input signal gene products blocks the expression of the output gene.如果信号 A 和信号 B 同时存在,期望的基因产物将不会被表达。图中所示的启动子都是诱导型的。输出基因的激活启动子是组成型的,因此未在图中显示。该组成型启动子使输出基因保持“开启”状态;只有当(与与门类似)两个输入信号的基因产物形成复合体、阻断输出基因的表达时,输出基因才会失活。]]<br />
<br />
<br />
<br />
=== Analog tuners 模拟调谐器===<br />
<br />
Using negative feedback and identical promoters, linearizer gene circuits can impose uniform gene expression that depends linearly on extracellular chemical inducer concentration.<br />
<br />
线性化基因电路利用负反馈和完全相同的启动子,可以实现线性依赖于细胞外化学诱导剂浓度的均一基因表达。<ref name="pmid19279212">{{cite journal | vauthors = Nevozhay D, Adams RM, Murphy KF, Josic K, Balázsi G| title = Negative autoregulation linearizes the dose-response and suppresses the heterogeneity of gene expression | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 106 | issue = 13 | pages = 5123-8 | date = March 31, 2009 | pmid = 19279212 | pmc = 2654390 | doi = 10.1073/pnas.0809901106 }}</ref><br />
<br />
<br />
<br />
=== Controllers of gene expression heterogeneity 基因表达异质性的控制===<br />
<br />
Synthetic gene circuits can control gene expression heterogeneity and can be controlled independently of the gene expression mean.<br />
<br />
合成基因电路可以控制基因表达的异质性,且这种控制可以独立于基因表达的均值进行。<ref name="pmid17189188">{{cite journal | vauthors = Blake WJ, Balázsi G, Kohanski MA, Isaacs FJ, Murphy KF, Kuang Y, Cantor CR, Walt DR, Collins JJ| title = Phenotypic Consequences of Promoter-Mediated Transcriptional Noise | journal = Molec. Cell | volume = 24 | issue = 6 | pages = 853-65 | date = December 28, 2006 | pmid = 17189188 | doi = 10.1016/j.molcel.2006.11.003 }}</ref><br />
<br />
<br />
<br />
=== Other engineered systems 其他工程系统===<br />
<br />
<!--- Categories ---><br />
<br />
<br />
Engineered systems are the result of implementation of combinations of different control mechanisms. A limited counting mechanism was implemented by a pulse-controlled gene cascade<ref>{{cite journal | last1 = Friedland | first1 = A.E. | last2 = Lu | first2 = T.K | last3 = Wang | first3 = X. | last4 = Shi | first4 = D. | last5 = Church | first5 = G. | last6 = Collins | first6 = J.J. | year = 2009 | title = Synthetic Gene Networks That Count | url = | journal = Science | volume = 324 | issue = 5931| pages = 1199–1202 | doi=10.1126/science.1172005 | pmid=19478183 | pmc=2690711}}</ref> and application of logic elements enables genetic "programming" of cells as in the research of Tabor et al., which synthesized a photosensitive bacterial edge detection program.<ref>{{cite journal | last1 = Tabor | first1 = J.J. | last2 = Salis | first2 = H.M. | last3 = Simpson | first3 = Z.B. | last4 = Chevalier | first4 = A.A. | last5 = Levskaya | first5 = A. | last6 = Marcotte | first6 = E.M. | last7 = Voigt | first7 = C.A. | last8 = Ellington | first8 = A.D. | year = 2009 | title = A Synthetic Edge Detection Program | url = | journal = Cell | volume = 137 | issue = 7| pages = 1272–1281 | doi=10.1016/j.cell.2009.04.048| pmid = 19563759 | pmc = 2775486 }}</ref><br />
<br />
Category:Synthetic biology<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biological circuit]]. Its edit history can be viewed at [[合成生物电路/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E7%94%B5%E8%B7%AF&diff=17881合成生物电路2020-11-04T13:38:27Z<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共956,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
--[[用户:小趣木木|小趣木木]]([[用户讨论:小趣木木|讨论]])文本缺失 需要补充<br />
{{Synthetic biology}}<br />
<br />
[[File:Lac Operon.svg|thumb|275px|Lac Operon|The ''lac'' operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. <br /><br />
<br />
<br />
[文件: Lac Operon.svg | thumb | 275px | Lac Operon | 乳糖操纵子(lac operon)是一种天然的生物电路,许多合成电路都以它为基础。上图:阻遏状态;下图:激活状态。<br />
<br />
'''''1'': RNA polymerase, ''2'': Repressor, ''3'': Promoter, ''4'': Operator, ''5'': Lactose, ''6'': ''lacZ'', ''7'': ''lacY'', ''8'': ''lacA''.]]<br />
<br />
<br />
1: RNA 聚合酶,2: 阻遏物,3: 启动子,4: 操纵基因,5: 乳糖,6: lacZ,7: lacY,8: lacA。]<br />
<br />
<br />
<br />
'''Synthetic biological circuits''' are an application of [[synthetic biology]] where biological parts inside a [[Cell (biology)|cell]] are designed to perform logical functions mimicking those observed in [[electronic circuit]]s. The applications range from simply inducing production to adding a measurable element, like [[Green Fluorescent Protein|GFP]], to an existing [[Gene regulatory network|natural biological circuit]], to implementing completely new systems of many parts.<ref name="Kobayashi">{{cite journal | last1 = Kobayashi | first1 = H. | last2 = Kærn | first2 = M. | last3 = Araki | first3 = M. | last4 = Chung | first4 = K. | last5 = Gardner | first5 = T. S. | last6 = Cantor | first6 = C. R. | last7 = Collins | first7 = J. J. | year = 2004 | title = Programmable cells: Interfacing natural and engineered gene networks | journal = PNAS | volume = 101 | issue = 22| pages = 8414–8419 | doi=10.1073/pnas.0402940101 | pmid=15159530 | pmc=420408}}</ref><br />
<br />
<br />
合成生物电路是合成生物学的一种应用:细胞内的生物部件被设计用来执行模仿电子电路的逻辑功能。其应用范围从简单地诱导产物生成,到在现有的天然生物电路中加入可测量的元件(如绿色荧光蛋白 GFP),再到实现由许多部件组成的全新系统。<br />
<br />
[[Image:Protein translation.gif|thumb|300px| A [[ribosome]] is a [[biological machine]].核糖体是一个生物机器。]]<br />
<br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented.<ref name="SynBioFaq">{{cite web|title=Synthetic Biology: FAQ|url=http://syntheticbiology.org/FAQ.html|work=SyntheticBiology.org|accessdate=21 December 2011|url-status=dead|archiveurl=https://web.archive.org/web/20021212065409/http://syntheticbiology.org/faq.html|archivedate=12 December 2002}}</ref> These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "[[biological machine]]s" to perform a vast range of useful functions.<ref name="Kobayashi"/><br />
<br />
<br />
合成生物学的目标是生成一系列可调节且已表征的部件或模块,利用它们可以轻松设计并实现任何所需的合成生物电路。这些电路可以用来修改细胞功能、建立细胞对环境条件的响应,或影响细胞的发育。通过在细胞系统中实现合理、可控的逻辑元件,研究人员可以将生命系统用作工程化的“生物机器”,执行大量有用的功能。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
<br />
目前,合成电路是系统生物学研究中的一个新兴领域,每年都有更多详细介绍合成生物电路的文章发表。在鼓励教育和推广方面,人们也表现出浓厚的兴趣:国际基因工程机器竞赛(iGEM)管理着 BioBrick(生物积木)部件的创建和标准化,以便让本科生和高中生设计自己的合成生物电路。<br />
<br />
The first natural gene circuit studied in detail was the [[lac operon]]. In studies of [[diauxie|diauxic growth]] of ''[[E. coli]]'' on two-sugar media, [[Jacques Monod]] and [[Francois Jacob]] discovered that ''E.coli'' preferentially consumes the more easily processed [[glucose]] before switching to [[lactose]] metabolism. They discovered that the mechanism that controlled the metabolic "switching" function was a two-part control mechanism on the lac operon. When lactose is present in the cell the [[enzyme]] [[β-galactosidase]] is produced to convert lactose into [[glucose]] or [[galactose]]. When lactose is absent in the cell the lac repressor inhibits the production of the enzyme β-galactosidase to prevent any inefficient processes within the cell.<br />
<br />
<br />
<br />
The lac operon is used in the [[biotechnology]] industry for production of [[recombinant DNA|recombinant]] [[proteins]] for therapeutic use. The gene or genes for producing an [[exogenous]] protein are placed on a [[plasmid]] under the control of the lac promoter. Initially the cells are grown in a medium that does not contain lactose or other sugars, so the new genes are not expressed. Once the cells reach a certain point in their growth, [[IPTG|Isopropyl β-D-1-thiogalactopyranoside (IPTG)]] is added. IPTG, a molecule similar to lactose but with a sulfur bond that is not hydrolyzable, so that ''E. coli'' does not digest it, is used to activate or "[[Regulation of gene expression#Inducible vs. repressible systems|induce]]" the production of the new protein. Once the cells are induced, it is difficult to remove IPTG from the cells and therefore it is difficult to stop expression.<br />
<br />
<br />
合成生物电路既有近期应用,也有长期应用,涵盖代谢工程和合成生物学等不同领域。已成功示范的例子包括药物生产和燃料生产。然而,如果不借助合成细胞电路的基本原理,直接进行基因导入的方法本身并非有效。例如,上述每个成功的系统都采用了某种实现“全或无”式诱导或表达的方法,即通过一个引入简单阻遏物或启动子的生物电路,来促进产物生成或抑制竞争通路。然而,由于对细胞网络和天然电路的了解有限,实现具有更精确控制和反馈的更稳健方案受到阻碍。这正是合成细胞电路当前受到关注的原因。<br />
<br />
<br />
<br />
Two early examples of synthetic biological circuits were published in [[Nature (journal)|Nature]] in 2000. One, by Tim Gardner, Charles Cantor, and [[James Collins (bioengineer)|Jim Collins]] working at [[Boston University]], demonstrated a "bistable" switch in ''E. coli''. The switch is turned on by heating the culture of bacteria and turned off by addition of IPTG. They used GFP as a reporter for their system.<ref name="Gardner">Gardner, T.S., Cantor, C.R., Collins, J.J. Construction of a genetic toggle switch in Escherichia coli. ''Nature'' 403, 339-342 (20 January 2000).</ref> The second, by [[Michael Elowitz]] and [[Stanislas Leibler]], showed that three repressor genes could be connected to form a negative feedback loop termed the [[Repressilator]] that produces self-sustaining oscillations of protein levels in ''E. coli.''<ref>{{Cite journal|last=Stanislas Leibler|last2=Elowitz|first2=Michael B.|date=January 2000|title=A synthetic oscillatory network of transcriptional regulators|journal=Nature|volume=403|issue=6767|pages=335–338|doi=10.1038/35002125|pmid=10659856|issn=1476-4687}}</ref><br />
<br />
<br />
对细胞电路理解的深入可以带来令人兴奋的新改造,例如能对环境刺激作出反应的细胞。比如,可以开发出这样的细胞:它们能感知有毒环境,并通过激活用于降解所感知毒素的通路来作出反应。要开发这样的细胞,就必须创建一个能对给定刺激作出适当响应的复杂合成细胞电路。<br />
<br />
<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in [[systems biology]] with more publications detailing synthetic biological circuits published every year.<ref>{{cite journal | last1 = Purnick | first1 = Priscilla E. M. | last2 = Weis | first2 = Ron | year = 2009 | title = The second wave of synthetic biology: from modules to systems | url = | journal = Nature Reviews Molecular Cell Biology | volume = 10 | issue = 6| pages = 410–422 | doi = 10.1038/nrm2698 | pmid=19461664}}</ref> There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition<ref>International Genetically Engineered Machines (iGem) http://igem.org/Main_Page</ref> manages the creation and standardization of [[BioBrick]] parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
<br />
鉴于合成细胞电路代表了一种控制细胞活动的形式,可以推断:只要完全了解细胞通路,就可以开发出只实现细胞生存与繁殖所必需通路的“即插即用”合成细胞。从这个可视为最小基因组的细胞出发,可以加入工具箱中的部件,构建带有适当合成电路、形成有效反馈系统的明确定义的通路。借助这种自底向上的构建方法以及拟议的电路部件映射数据库,可以沿用模拟计算机或电子电路的技术来重新设计细胞并建立细胞模型,便于排除故障、预测行为和产量。<br />
<br />
<br />
<br />
== Interest and goals 研究方向和目标==<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for [[metabolic engineering]], and [[synthetic biology]]. Those demonstrated successfully include pharmaceutical production,<ref>{{cite journal | last1 = Ro | first1 = D.-K. | last2 = Paradise | first2 = E.M. | last3 = Ouellet | first3 = M. | last4 = Fisher | first4 = K.J. | last5 = Newman | first5 = K.L. | last6 = Ndungu | first6 = J.M. | last7 = Ho | first7 = K.A. | last8 = Eachus | first8 = R.A. | last9 = Ham | first9 = T.S. | last10 = Kirby | first10 = J. | last11 = Chang | first11 = M.C.Y. | last12 = Withers | first12 = S.T. | last13 = Shiba | first13 = Y. | last14 = Sarpong | first14 = R. | last15 = Keasling | first15 = J.D. | year = 2006 | title = Production of the antimalarial drug precursor artemisinic acid in engineered yeast | url = | journal = Nature | volume = 440 | issue = 7086| pages = 940–943 | doi=10.1038/nature04640 | pmid=16612385}}</ref> and fuel production.<ref>{{cite journal | last1 = Fortman | first1 = J.L. | last2 = Chhabra | first2 = S. | last3 = Mukhopadhyay | first3 = A. | last4 = Chou | first4 = H. | last5 = Lee | first5 = T.S. | last6 = Steen | first6 = E. | last7 = Keasling | first7 = J.D. | year = 2008 | title = Biofuel alternatives to ethanol: pumping the microbial well | url = https://digital.library.unt.edu/ark:/67531/metadc1013351/| journal = Trends Biotechnol | volume = 26 | issue = 7| pages = 375–381 | doi=10.1016/j.tibtech.2008.03.008| pmid = 18471913 }}</ref> However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple [[repressor]] or [[promoter (genetics)|promoter]] is introduced to facilitate creation of the product, or inhibition of a competing pathway. 
However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
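The "all-or-none induction" mentioned above can be illustrated with a steady-state dose-response curve: a cooperatively regulated promoter (high Hill coefficient) switches sharply between OFF and ON, while a non-cooperative promoter responds gradually. The Hill coefficients and threshold K below are illustrative assumptions.<br />

```python
# Dose-response of an inducible promoter: graded vs. all-or-none.

def dose_response(inducer, K=1.0, n=1.0):
    """Steady-state promoter activity as a fraction of maximum."""
    return inducer ** n / (K ** n + inducer ** n)

doses = [0.25, 0.5, 1.0, 2.0, 4.0]
graded = [dose_response(d, n=1) for d in doses]   # gradual response
sharp = [dose_response(d, n=8) for d in doses]    # near all-or-none

# Fold-change in output between a dose 2x below and 2x above threshold:
print(dose_response(2.0, n=8) / dose_response(0.5, n=8))   # ~256: switch-like
print(dose_response(2.0, n=1) / dose_response(0.5, n=1))   # ~2: graded
```

With high cooperativity a small change in inducer around the threshold flips the promoter essentially fully on or off, which is the behaviour the all-or-none induction schemes above exploit.<br />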
<br />
<br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin.<ref>{{cite journal | last1 = Keasling | first1 = J.D. | year = 2008 | title = Synthetic biology for synthetic chemistry. | url = | journal = ACS Chem Biol | volume = 3 | issue = 1| pages = 64–76 | doi=10.1021/cb7002434| pmid = 18205292 | title-link = synthetic chemistry }}</ref> To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
<br />
Given synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play"<ref name="Kobayashi" /> cells with well defined genetic circuitry can be engineered. It is widely believed that if a proper toolbox of parts is generated,<ref>{{cite journal | last1 = Lucks | first1 = Julius B | last2 = Qi | first2 = Lei | last3 = Whitaker | first3 = Weston R | last4 = Arkin | first4 = Adam P | year = 2008 | title = Toward scalable parts families for predictable design of biological circuits | url = | journal = Current Opinion in Microbiology | volume = 11 | issue = 6| pages = 567–573 | doi = 10.1016/j.mib.2008.10.002 | pmid = 18983935 }}</ref> synthetic cells can be developed implementing only the pathways necessary for cell survival reproduction. From this cell, to be thought of as a minimal [[genome]] cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
<br />
== Example circuits 电路示例 ==<br />
<br />
<br />
<br />
<br />
<br />
=== Oscillators 振荡器 ===<br />
<br />
# [[Repressilator]] 抑制震荡子<br />
<br />
# Mammalian tunable synthetic oscillator 哺乳动物可调谐合成振荡器<br />
<br />
<br />
# Bacterial tunable synthetic oscillator 细菌可调谐合成振荡器<br />
<br />
<br />
# Coupled bacterial oscillator<br />
耦合细菌振荡器<br />
<br />
# Globally coupled bacterial oscillator 全局耦合细菌振荡器<br />
<br />
<br />
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression.埃洛维茨等人和 Fung 等人创造了振荡电路,利用多种自调节机制使基因产物的表达产生随时间变化的振荡。<ref>{{cite journal | last1 = Elowitz | first1 = M.B. | last2 = Leibler | first2 = S. | year = 2000 | title = A synthetic oscillatory network of transcriptional regulators | pmid = 10659856| journal = Nature | volume = 403 | issue = 6767| pages = 335–338 | doi=10.1038/35002125}}</ref><ref>{{cite journal | last1 = Fung | first1 = E. | last2 = Wong | first2 = W.W. | last3 = Suen | first3 = J.K. | last4 = Bulter | first4 = T. | last5 = Lee | first5 = S. | last6 = Liao | first6 = J.C. | year = 2005 | title = A synthetic gene–metabolic oscillator | url = | journal = Nature | volume = 435 | issue = 7038| pages = 118–122 | doi=10.1038/nature03508| pmid = 15875027 }}</ref> <br />
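The repressilator dynamics can be sketched with the standard dimensionless three-gene model (one mRNA and one protein per gene, each protein repressing the next gene in the ring), integrated here with a plain Euler scheme. The parameter values (alpha = 216, alpha0 = 0.216, beta = 5, n = 2) are textbook scaled values that give sustained oscillations; they are illustrative, not fitted values from the original papers.<br />

```python
# Toy Euler integration of the dimensionless repressilator ODEs:
#   dm_i/dt = -m_i + alpha / (1 + p_{i-1}^n) + alpha0
#   dp_i/dt = -beta * (p_i - m_i)

def step(state, dt, alpha=216.0, alpha0=0.216, beta=5.0, n=2.0):
    m, p = state[:3], state[3:]
    dm = [-m[i] + alpha / (1.0 + p[(i - 1) % 3] ** n) + alpha0 for i in range(3)]
    dp = [-beta * (p[i] - m[i]) for i in range(3)]
    return [s + dt * d for s, d in zip(state, dm + dp)]

def simulate(t_end=100.0, dt=0.001):
    state = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0]   # asymmetric start breaks symmetry
    trace = []
    for _ in range(int(t_end / dt)):
        state = step(state, dt)
        trace.append(state[3])                # follow protein 1
    return trace

trace = simulate()
late = trace[len(trace) // 2:]               # after the initial transient
print(max(late) - min(late))                  # large swing: sustained oscillation
```

The negative-feedback ring has no stable resting state at these parameters, so protein 1 keeps cycling long after the transient; with a lower Hill coefficient or weaker promoters the same code shows damped oscillations instead.<br />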
<br />
<br />
<br />
=== Bistable switches 双稳态开关===<br />
<br />
<br />
# Toggle-switch 拨动开关<br />
<br />
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli resulting in persistent responses.<ref name="Gardner" />加德纳等人利用两个控制单元之间的相互阻遏,实现了能够以双稳态方式控制细胞的拨动开关:瞬时刺激产生持久的响应。<br />
<br />
<br />
<br />
<br />
工程系统是不同控制机制组合实现的结果。一个有限的计数机制通过脉冲控制的基因级联得以实现;逻辑元件的应用使细胞的遗传“编程”成为可能,例如 Tabor 等人的研究合成了一个光敏细菌边缘检测程序。<br />
<br />
=== Logical operators 逻辑运算===<br />
<br />
[[File:SynBioCirc-AndLogicGate.jpg|frame|center|The logical [[AND gate]].逻辑与门<ref name="rocha">{{cite journal | last1 = Silva-Rocha | first1 = R. | last2 = de Lorenzo | first2 = V. | year = 2008 | title = Mining logic gates in prokaryotic transcriptional regulation networks | url = | journal = FEBS Letters | volume = 582 | issue = 8| pages = 1237–1244 | doi=10.1016/j.febslet.2008.01.060 | pmid=18275855}}</ref><ref name="buchler">{{cite journal | last1 = Buchler | first1 = N.E. | last2 = Gerland | first2 = U. | last3 = Hwa | first3 = T. | year = 2003 | title = On schemes of combinatorial transcription logic | journal = PNAS | volume = 100 | issue = 9| pages = 5136–5141 | doi=10.1073/pnas.0930314100 | pmid=12702751 | pmc=404558}}</ref> If Signal A '''AND''' Signal B are present, then the desired gene product will result. All promoters shown are inducible, activated by the displayed gene product. Each signal activates expression of a separate gene (shown in light blue). The expressed proteins then can either form a complete complex in [[cytosol]], that is capable of activating expression of the output (shown), or can act separately to induce expression, such as separately removing an inhibiting protein and inducing activation of the uninhibited promoter.只有信号 A 和信号 B 同时存在,期望的基因产物才会被表达。图中所示的启动子都是诱导型的,由所示的基因产物激活。每个信号各自激活一个单独基因的表达(浅蓝色所示)。表达出的蛋白质既可以在细胞溶质中形成能够激活输出基因表达的完整复合体(如图所示),也可以分别发挥作用来诱导表达,例如分别移除一个抑制蛋白并诱导激活不再受抑制的启动子。]]<br />
<br />
<br />
<br />
Computational design and evaluation of DNA circuits to achieve optimal performance<br />
<br />
实现最佳性能的 DNA 电路的计算设计和评估<br />
<br />
[[File:SynBioCirc-OrLogicGate.jpg|frame|center|The logical [[OR gate]].逻辑或门<ref name="rocha" /><ref name="buchler" /> If Signal A '''OR''' Signal B are present, then the desired gene product will result. All promoters shown are inducible. Either signal is capable of activating the expression of the output gene product, and only the action of a single promoter is required for gene expression. Post-transcriptional regulation mechanisms can prevent the presence of both inputs producing a compounded high output, such as implementing a low binding affinity [[ribosome binding site]].只要信号 A 或信号 B 存在,期望的基因产物就会被表达。图中所示的启动子都是诱导型的。任一信号都能激活输出基因产物的表达,基因表达只需单个启动子发挥作用。转录后调控机制可以防止两个输入同时存在时产生叠加的高输出,例如采用一个低结合亲和力的核糖体结合位点。]]<br />
<br />
<br />
<br />
Recent developments in artificial gene synthesis and the corresponding increase in competition within the industry have led to a significant drop in price and wait time of gene synthesis and helped improve methods used in circuit design. At the moment, circuit design is improving at a slow pace because of insufficient organization of known multiple gene interactions and mathematical models. This issue is being addressed by applying computer-aided design (CAD) software to provide multimedia representations of circuits through images, text and programming language applied to biological circuits. Some of the more well known CAD programs include GenoCAD, Clotho framework and j5. GenoCAD uses grammars, which are either opensource or user generated "rules" which include the available genes and known gene interactions for cloning organisms. Clotho framework uses the Biobrick standard rules.<br />
<br />
最近人工基因合成领域的发展以及行业内竞争的相应加剧,使基因合成的价格和等待时间显著下降,并帮助改进了电路设计中使用的方法。目前,由于对已知的多基因相互作用和数学模型整理不足,电路设计的进展较为缓慢。这一问题正通过应用计算机辅助设计(CAD)软件来解决:这类软件利用图像、文本和应用于生物电路的编程语言来提供电路的多媒体表示。较著名的 CAD 程序包括 GenoCAD、Clotho 框架和 j5。GenoCAD 使用“语法”,即开源的或由用户生成的“规则”,其中包括可用于克隆生物的基因和已知的基因相互作用。Clotho 框架使用 BioBrick(生物积木)标准规则。<br />
<br />
[[File:SynBioCirc-NandLogicGate.jpg|frame|center|The logical [[Negated AND gate]].逻辑与非门<ref name="rocha" /><ref name="buchler" /> If Signal A '''AND''' Signal B are present, then the desired gene product will '''NOT''' result. All promoters shown are inducible. The activating promoter for the output gene is constitutive, and thus not shown. The constitutive promoter for the output gene keeps it "on" and is only deactivated when (similar to the AND gate) a complex as a result of two input signal gene products blocks the expression of the output gene.如果信号 A 和信号 B 同时存在,期望的基因产物将不会被表达。图中所示的启动子都是诱导型的。输出基因的激活启动子是组成型的,因此未在图中显示。该组成型启动子使输出基因保持“开启”状态;只有当(与与门类似)两个输入信号的基因产物形成复合体、阻断输出基因的表达时,输出基因才会失活。]]<br />
<br />
<br />
<br />
=== Analog tuners 模拟调谐器===<br />
<br />
Using negative feedback and identical promoters, linearizer gene circuits can impose uniform gene expression that depends linearly on extracellular chemical inducer concentration.线性化基因电路利用负反馈和完全相同的启动子,可以实现线性依赖于细胞外化学诱导剂浓度的均一基因表达。<ref name="pmid19279212">{{cite journal | vauthors = Nevozhay D, Adams RM, Murphy KF, Josic K, Balázsi G| title = Negative autoregulation linearizes the dose-response and suppresses the heterogeneity of gene expression | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 106 | issue = 13 | pages = 5123-8 | date = March 31, 2009 | pmid = 19279212 | pmc = 2654390 | doi = 10.1073/pnas.0809901106 }}</ref><br />
<br />
<br />
<br />
=== Controllers of gene expression heterogeneity 基因表达异质性的控制===<br />
<br />
Synthetic gene circuits can control gene expression heterogeneity and can be controlled independently of the gene expression mean.合成基因电路可以控制基因表达的异质性,且这种控制可以独立于基因表达的均值进行。<ref name="pmid17189188">{{cite journal | vauthors = Blake WJ, Balázsi G, Kohanski MA, Isaacs FJ, Murphy KF, Kuang Y, Cantor CR, Walt DR, Collins JJ| title = Phenotypic Consequences of Promoter-Mediated Transcriptional Noise | journal = Molec. Cell | volume = 24 | issue = 6 | pages = 853-65 | date = December 28, 2006 | pmid = 17189188 | doi = 10.1016/j.molcel.2006.11.003 }}</ref><br />
<br />
<br />
<br />
=== Other engineered systems 其他工程系统===<br />
<br />
<!--- Categories ---><br />
<br />
<br />
Engineered systems are the result of implementation of combinations of different control mechanisms. A limited counting mechanism was implemented by a pulse-controlled gene cascade<ref>{{cite journal | last1 = Friedland | first1 = A.E. | last2 = Lu | first2 = T.K | last3 = Wang | first3 = X. | last4 = Shi | first4 = D. | last5 = Church | first5 = G. | last6 = Collins | first6 = J.J. | year = 2009 | title = Synthetic Gene Networks That Count | url = | journal = Science | volume = 324 | issue = 5931| pages = 1199–1202 | doi=10.1126/science.1172005 | pmid=19478183 | pmc=2690711}}</ref> and application of logic elements enables genetic "programming" of cells as in the research of Tabor et al., which synthesized a photosensitive bacterial edge detection program.<ref>{{cite journal | last1 = Tabor | first1 = J.J. | last2 = Salis | first2 = H.M. | last3 = Simpson | first3 = Z.B. | last4 = Chevalier | first4 = A.A. | last5 = Levskaya | first5 = A. | last6 = Marcotte | first6 = E.M. | last7 = Voigt | first7 = C.A. | last8 = Ellington | first8 = A.D. | year = 2009 | title = A Synthetic Edge Detection Program | url = | journal = Cell | volume = 137 | issue = 7| pages = 1272–1281 | doi=10.1016/j.cell.2009.04.048| pmid = 19563759 | pmc = 2775486 }}</ref><br />
<br />
Category:Synthetic biology<br />
<br />
类别: 合成生物学<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biological circuit]]. Its edit history can be viewed at [[合成生物电路/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E7%94%B5%E8%B7%AF&diff=17880合成生物电路2020-11-04T13:30:39Z<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,翻译字数共956,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
--[[用户:小趣木木|小趣木木]]([[用户讨论:小趣木木|讨论]])文本缺失 需要补充<br />
{{Synthetic biology}}<br />
<br />
[[File:Lac Operon.svg|thumb|275px|Lac Operon|The ''lac'' operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. <br /><br />
<br />
[[File:Lac Operon.svg|thumb|275px|Lac Operon|The lac operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. <br /><br />
<br />
[文件: Lac Operon.svg | thumb | 275px | Lac Operon | ''lac'' 操纵子是一种天然的生物电路,许多合成电路都以它为基础。上图: 阻遏状态,下图: 活跃状态。< br/><br />
<br />
'''''1'': RNA polymerase, ''2'': Repressor, ''3'': Promoter, ''4'': Operator, ''5'': Lactose, ''6'': ''lacZ'', ''7'': ''lacY'', ''8'': ''lacA''.]]<br />
<br />
1: RNA polymerase, 2: Repressor, 3: Promoter, 4: Operator, 5: Lactose, 6: lacZ, 7: lacY, 8: lacA.]]<br />
<br />
1: RNA 聚合酶,2: 阻遏蛋白,3: 启动子,4: 操纵基因,5: 乳糖,6: lacZ,7: lacY,8: lacA。]<br />
<br />
<br />
<br />
'''Synthetic biological circuits''' are an application of [[synthetic biology]] where biological parts inside a [[Cell (biology)|cell]] are designed to perform logical functions mimicking those observed in [[electronic circuit]]s. The applications range from simply inducing production to adding a measurable element, like [[Green Fluorescent Protein|GFP]], to an existing [[Gene regulatory network|natural biological circuit]], to implementing completely new systems of many parts.<ref name="Kobayashi">{{cite journal | last1 = Kobayashi | first1 = H. | last2 = Kærn | first2 = M. | last3 = Araki | first3 = M. | last4 = Chung | first4 = K. | last5 = Gardner | first5 = T. S. | last6 = Cantor | first6 = C. R. | last7 = Collins | first7 = J. J. | year = 2004 | title = Programmable cells: Interfacing natural and engineered gene networks | journal = PNAS | volume = 101 | issue = 22| pages = 8414–8419 | doi=10.1073/pnas.0402940101 | pmid=15159530 | pmc=420408}}</ref><br />
<br />
Synthetic biological circuits are an application of synthetic biology where biological parts inside a cell are designed to perform logical functions mimicking those observed in electronic circuits. The applications range from simply inducing production to adding a measurable element, like GFP, to an existing natural biological circuit, to implementing completely new systems of many parts.<br />
<br />
合成生物电路是合成生物学的一种应用:细胞内的生物部件被设计用来执行模仿电子电路的逻辑功能。其应用范围从简单地诱导产物表达,到在现有的天然生物电路中添加可测量的元件(如绿色荧光蛋白 GFP),再到实现由许多部件组成的全新系统。<br />
<br />
[[Image:Protein translation.gif|thumb|300px| A [[ribosome]] is a [[biological machine]].核糖体是一个生物机器。]]<br />
<br />
A ribosome is a biological machine.<br />
<br />
[核糖体是一种生物机器]<br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented.<ref name="SynBioFaq">{{cite web|title=Synthetic Biology: FAQ|url=http://syntheticbiology.org/FAQ.html|work=SyntheticBiology.org|accessdate=21 December 2011|url-status=dead|archiveurl=https://web.archive.org/web/20021212065409/http://syntheticbiology.org/faq.html|archivedate=12 December 2002}}</ref> These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "[[biological machine]]s" to perform a vast range of useful functions.<ref name="Kobayashi"/><br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented. These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "biological machines" to perform a vast range of useful functions.<br />
<br />
合成生物学的目标是生成一系列可调谐、已表征的部件或模块,用这些部件或模块,任何理想的合成生物电路都可以轻松地设计和实现。这些电路可以用来修改细胞功能、建立细胞对环境条件的响应,或影响细胞的发育。通过在细胞系统中实现合理、可控的逻辑元件,研究人员可以将活体系统作为工程化的“生物机器”来执行大量有用的功能。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in systems biology with more publications detailing synthetic biological circuits published every year. There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition manages the creation and standardization of BioBrick parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
目前,合成电路是系统生物学研究的一个新兴领域,每年都有更多详细介绍合成生物电路的论文发表。在鼓励教育和推广方面,人们也表现出浓厚的兴趣: 国际基因工程机器竞赛负责生物积木(BioBrick)零件的创造和标准化,以此让本科生和高中生能够设计自己的合成生物电路。<br />
<br />
The first natural gene circuit studied in detail was the [[lac operon]]. In studies of [[diauxie|diauxic growth]] of ''[[E. coli]]'' on two-sugar media, [[Jacques Monod]] and [[Francois Jacob]] discovered that ''E.coli'' preferentially consumes the more easily processed [[glucose]] before switching to [[lactose]] metabolism. They discovered that the mechanism that controlled the metabolic "switching" function was a two-part control mechanism on the lac operon. When lactose is present in the cell the [[enzyme]] [[β-galactosidase]] is produced to convert lactose into [[glucose]] or [[galactose]]. When lactose is absent in the cell the lac repressor inhibits the production of the enzyme β-galactosidase to prevent any inefficient processes within the cell.<br />
<br />
<br />
<br />
The lac operon is used in the [[biotechnology]] industry for production of [[recombinant DNA|recombinant]] [[proteins]] for therapeutic use. The gene or genes for producing an [[exogenous]] protein are placed on a [[plasmid]] under the control of the lac promoter. Initially the cells are grown in a medium that does not contain lactose or other sugars, so the new genes are not expressed. Once the cells reach a certain point in their growth, [[IPTG|Isopropyl β-D-1-thiogalactopyranoside (IPTG)]] is added. IPTG, a molecule similar to lactose, but with a sulfur bond that is not hydrolyzable so that ''E. coli'' does not digest it, is used to activate or "[[Regulation of gene expression#Inducible vs. repressible systems|induce]]" the production of the new protein. Once the cells are induced, it is difficult to remove IPTG from the cells and therefore it is difficult to stop expression.<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for metabolic engineering, and synthetic biology. Those demonstrated successfully include pharmaceutical production, and fuel production. However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple repressor or promoter is introduced to facilitate creation of the product, or inhibition of a competing pathway. However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
<br />
合成生物电路既有近期应用,也有长期应用,包括在代谢工程和合成生物学中的多种应用。已成功示范的包括药物生产和燃料生产。然而,如果不借助合成细胞电路的基本原理,直接导入基因的方法本身并不有效。例如,上述每个成功的系统都采用了引入“全或无”诱导或表达的方法:即在生物电路中引入一个简单的阻遏蛋白或启动子,以促进产物的生成或抑制竞争通路。然而,由于对细胞网络和天然电路的了解有限,实施具有更精确控制和反馈的更稳健方案受到了阻碍。这正是当前对合成细胞电路感兴趣的直接原因。<br />
<br />
<br />
<br />
Two early examples of synthetic biological circuits were published in [[Nature (journal)|Nature]] in 2000. One, by Tim Gardner, Charles Cantor, and [[James Collins (bioengineer)|Jim Collins]] working at [[Boston University]], demonstrated a "bistable" switch in ''E. coli''. The switch is turned on by heating the culture of bacteria and turned off by addition of IPTG. They used GFP as a reporter for their system.<ref name="Gardner">Gardner, T.s., Cantor, C.R., Collins, J. Construction of a genetic toggle switch in Escherichia coli. ''Nature'' 403, 339-342 (20 January 2000).</ref> The second, by [[Michael Elowitz]] and [[Stanislas Leibler]], showed that three repressor genes could be connected to form a negative feedback loop termed the [[Repressilator]] that produces self-sustaining oscillations of protein levels in ''E. coli.''<ref>{{Cite journal|last=Stanislas Leibler|last2=Elowitz|first2=Michael B.|date=January 2000|title=A synthetic oscillatory network of transcriptional regulators|journal=Nature|volume=403|issue=6767|pages=335–338|doi=10.1038/35002125|pmid=10659856|issn=1476-4687}}</ref><br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin. To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
随着对细胞电路理解的深入,人们可以做出令人兴奋的新改造,例如使细胞能够对环境刺激作出反应。比如,可以开发出能够感知有毒环境、并通过激活降解相应毒素的通路来作出反应的细胞。为了研制这样的细胞,有必要创建一个能够对给定刺激作出适当响应的复杂合成细胞电路。<br />
<br />
<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in [[systems biology]] with more publications detailing synthetic biological circuits published every year.<ref>{{cite journal | last1 = Purnick | first1 = Priscilla E. M. | last2 = Weis | first2 = Ron | year = 2009 | title = The second wave of synthetic biology: from modules to systems | url = | journal = Nature Reviews Molecular Cell Biology | volume = 10 | issue = 6| pages = 410–422 | doi = 10.1038/nrm2698 | pmid=19461664}}</ref> There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition<ref>International Genetically Engineered Machines (iGem) http://igem.org/Main_Page</ref> manages the creation and standardization of [[BioBrick]] parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
Given synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play" synthetic cells can be developed implementing only the pathways necessary for cell survival reproduction. From this cell, to be thought of as a minimal genome cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
鉴于合成细胞电路代表了一种对细胞活动的控制方式,可以推断,只要完全了解细胞通路,就可以开发出只实现细胞生存繁殖所必需通路的“即插即用”合成细胞。从这个可视为最小基因组的细胞出发,人们可以从工具箱中添加部件,构建具有适当合成电路和有效反馈系统的明确定义的通路。得益于这种自底向上的构建方法以及拟议的电路部件映射数据库,类似于计算机或电子电路建模的技术可以用来重新设计细胞并对其建模,从而便于排除故障、预测行为和产量。<br />
<br />
<br />
<br />
== Interest and goals 研究方向和目标==<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for [[metabolic engineering]], and [[synthetic biology]]. Those demonstrated successfully include pharmaceutical production,<ref>{{cite journal | last1 = Ro | first1 = D.-K. | last2 = Paradise | first2 = E.M. | last3 = Ouellet | first3 = M. | last4 = Fisher | first4 = K.J. | last5 = Newman | first5 = K.L. | last6 = Ndungu | first6 = J.M. | last7 = Ho | first7 = K.A. | last8 = Eachus | first8 = R.A. | last9 = Ham | first9 = T.S. | last10 = Kirby | first10 = J. | last11 = Chang | first11 = M.C.Y. | last12 = Withers | first12 = S.T. | last13 = Shiba | first13 = Y. | last14 = Sarpong | first14 = R. | last15 = Keasling | first15 = J.D. | year = 2006 | title = Production of the antimalarial drug precursor artemisinic acid in engineered yeast | url = | journal = Nature | volume = 440 | issue = 7086| pages = 940–943 | doi=10.1038/nature04640 | pmid=16612385}}</ref> and fuel production.<ref>{{cite journal | last1 = Fortman | first1 = J.L. | last2 = Chhabra | first2 = S. | last3 = Mukhopadhyay | first3 = A. | last4 = Chou | first4 = H. | last5 = Lee | first5 = T.S. | last6 = Steen | first6 = E. | last7 = Keasling | first7 = J.D. | year = 2008 | title = Biofuel alternatives to ethanol: pumping the microbial well | url = https://digital.library.unt.edu/ark:/67531/metadc1013351/| journal = Trends Biotechnol | volume = 26 | issue = 7| pages = 375–381 | doi=10.1016/j.tibtech.2008.03.008| pmid = 18471913 }}</ref> However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple [[repressor]] or [[promoter (genetics)|promoter]] is introduced to facilitate creation of the product, or inhibition of a competing pathway. 
However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
<br />
<br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin.<ref>{{cite journal | last1 = Keasling | first1 = J.D. | year = 2008 | title = Synthetic biology for synthetic chemistry. | url = | journal = ACS Chem Biol | volume = 3 | issue = 1| pages = 64–76 | doi=10.1021/cb7002434| pmid = 18205292 | title-link = synthetic chemistry }}</ref> To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
<br />
Given synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play"<ref name="Kobayashi" /> cells with well defined genetic circuitry can be engineered. It is widely believed that if a proper toolbox of parts is generated,<ref>{{cite journal | last1 = Lucks | first1 = Julius B | last2 = Qi | first2 = Lei | last3 = Whitaker | first3 = Weston R | last4 = Arkin | first4 = Adam P | year = 2008 | title = Toward scalable parts families for predictable design of biological circuits | url = | journal = Current Opinion in Microbiology | volume = 11 | issue = 6| pages = 567–573 | doi = 10.1016/j.mib.2008.10.002 | pmid = 18983935 }}</ref> synthetic cells can be developed implementing only the pathways necessary for cell survival reproduction. From this cell, to be thought of as a minimal [[genome]] cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
<br />
== Example circuits 电路示例 ==<br />
<br />
<br />
<br />
<br />
<br />
=== Oscillators 振荡器 ===<br />
<br />
# [[Repressilator]] 抑制震荡子<br />
<br />
# Mammalian tunable synthetic oscillator 哺乳动物可调谐合成振荡器<br />
<br />
<br />
# Bacterial tunable synthetic oscillator 细菌可调谐合成振荡器<br />
<br />
<br />
# Coupled bacterial oscillator 耦合细菌振荡器<br />
<br />
# Globally coupled bacterial oscillator 全局耦合细菌振荡器<br />
<br />
<br />
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression.埃洛维茨(Elowitz)等人和冯(Fung)等人创造了振荡电路,利用多重自调节机制产生随时间振荡的基因产物表达。<ref>{{cite journal | last1 = Elowitz | first1 = M.B. | last2 = Leibler | first2 = S. | year = 2000 | title = A synthetic oscillatory network of transcriptional regulators | pmid = 10659856| journal = Nature | volume = 403 | issue = 6767| pages = 335–338 | doi=10.1038/35002125}}</ref><ref>{{cite journal | last1 = Fung | first1 = E. | last2 = Wong | first2 = W.W. | last3 = Suen | first3 = J.K. | last4 = Bulter | first4 = T. | last5 = Lee | first5 = S. | last6 = Liao | first6 = J.C. | year = 2005 | title = A synthetic gene–metabolic oscillator | url = | journal = Nature | volume = 435 | issue = 7038| pages = 118–122 | doi=10.1038/nature03508| pmid = 15875027 }}</ref> <br />
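The cyclic-repression oscillation described above can be sketched numerically. This is a minimal, protein-only toy model (the published repressilator also tracks mRNA; the function name and the parameter values `alpha=50`, `n=4` here are illustrative assumptions, not those of Elowitz and Leibler): three repressors inhibit each other in a ring, and sufficiently strong, cooperative repression destabilizes the steady state and yields sustained oscillations.

```python
def repressilator(alpha=50.0, n=4.0, dt=0.01, steps=30000):
    """Euler-integrate a protein-only repressilator: gene i is repressed
    by the protein of gene i-1 (cyclically). Returns the trajectory of p0."""
    p = [1.0, 1.5, 2.0]          # asymmetric start, away from the fixed point
    traj = []
    for _ in range(steps):
        # p[i - 1] wraps around via Python's negative indexing, closing the ring
        dp = [alpha / (1.0 + p[i - 1] ** n) - p[i] for i in range(3)]
        p = [p[i] + dt * dp[i] for i in range(3)]
        traj.append(p[0])
    return traj

# Sustained oscillation: local maxima keep appearing in the second half of the run.
tail = repressilator()[15000:]
peaks = sum(1 for i in range(1, len(tail) - 1)
            if tail[i - 1] < tail[i] > tail[i + 1])
```

With an odd number of repressors in the ring there is no stable "all agree" state, which is why the ring oscillates rather than latching like the toggle switch.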
<br />
<br />
<br />
=== Bistable switches 双稳态开关===<br />
<br />
<br />
# Toggle-switch 拨动开关<br />
<br />
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli resulting in persistent responses.<ref name="Gardner" />加德纳等人利用两个控制单元之间的相互阻遏,实现了能够以双稳态方式控制细胞的拨动开关: 瞬时刺激产生持久的响应。<br />
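The mutual-repression bistability can be illustrated with the two-variable model form used by Gardner et al. (the specific parameter values below are illustrative assumptions, not the paper's): each repressor inhibits the other's promoter, and with cooperative, balanced repression the circuit settles into one of two stable states depending on its history.

```python
def toggle(u0, v0, a1=10.0, a2=10.0, beta=2.0, gamma=2.0, dt=0.01, steps=5000):
    """Euler-integrate a two-repressor toggle switch: u represses v and
    v represses u, each with Hill-type cooperativity. Returns final (u, v)."""
    u, v = u0, v0
    for _ in range(steps):
        du = a1 / (1.0 + v ** beta) - u
        dv = a2 / (1.0 + u ** gamma) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# Two different starting points settle into two different stable states:
hi_u = toggle(3.0, 0.5)   # ends with u high, v low
hi_v = toggle(0.5, 3.0)   # ends with v high, u low
```

A transient stimulus that temporarily pushes the state across the separatrix flips the switch, and the new state persists after the stimulus is removed, which is the "transient stimuli, persistent responses" behaviour described above.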
<br />
<br />
<br />
<br />
=== Logical operators 逻辑运算===<br />
<br />
[[File:SynBioCirc-AndLogicGate.jpg|frame|center|The logical [[AND gate]].逻辑与门<ref name="rocha">{{cite journal | last1 = Silva-Rocha | first1 = R. | last2 = de Lorenzo | first2 = V. | year = 2008 | title = Mining logic gates in prokaryotic transcriptional regulation networks | url = | journal = FEBS Letters | volume = 582 | issue = 8| pages = 1237–1244 | doi=10.1016/j.febslet.2008.01.060 | pmid=18275855}}</ref><ref name="buchler">{{cite journal | last1 = Buchler | first1 = N.E. | last2 = Gerland | first2 = U. | last3 = Hwa | first3 = T. | year = 2003 | title = On schemes of combinatorial transcription logic | journal = PNAS | volume = 100 | issue = 9| pages = 5136–5141 | doi=10.1073/pnas.0930314100 | pmid=12702751 | pmc=404558}}</ref> If Signal A '''AND''' Signal B are present, then the desired gene product will result. All promoters shown are inducible, activated by the displayed gene product. Each signal activates expression of a separate gene (shown in light blue). The expressed proteins then can either form a complete complex in [[cytosol]], that is capable of activating expression of the output (shown), or can act separately to induce expression, such as separately removing an inhibiting protein and inducing activation of the uninhibited promoter.如果信号 A 和信号 B 产生,那么期望的基因产物将被表达出来。所有显示出的启动子都是诱导性的,并被表达的基因产物激活。每个信号激活一个单独基因的表达(如浅蓝色所示)。然后,表达的蛋白质可以在细胞溶胶中形成一个完整的能够激活输出的表达(如图所示)的复合体,或者可以单独作用诱导表达,例如单独去除抑制蛋白和诱导激活不受抑制的启动子。]]<br />
<br />
<br />
<br />
Computational design and evaluation of DNA circuits to achieve optimal performance<br />
<br />
实现最佳性能的 DNA 电路的计算设计和评估<br />
<br />
[[File:SynBioCirc-OrLogicGate.jpg|frame|center|The logical [[OR gate]].逻辑或门<ref name="rocha" /><ref name="buchler" /> If Signal A '''OR''' Signal B are present, then the desired gene product will result. All promoters shown are inducible. Either signal is capable of activating the expression of the output gene product, and only the action of a single promoter is required for gene expression. Post-transcriptional regulation mechanisms can prevent the presence of both inputs producing a compounded high output, such as implementing a low binding affinity [[ribosome binding site]].如果信号 A 或信号 B 产生,那么期望的基因产物将被表达出来。所有显示出的启动子都是诱导性的。无论何种信号,都能激活基因产物的表达,并且只需一个启动子就可以产生这种表达。转录后的调节机制可以阻止产生复合高产出的两个输入信号的产生,比如插入一个低结合亲和力的核糖体结合点。]]<br />
<br />
<br />
<br />
Recent developments in artificial gene synthesis and the corresponding increase in competition within the industry have led to a significant drop in price and wait time of gene synthesis and helped improve methods used in circuit design. At the moment, circuit design is improving at a slow pace because of insufficient organization of known multiple gene interactions and mathematical models. This issue is being addressed by applying computer-aided design (CAD) software to provide multimedia representations of circuits through images, text and programming language applied to biological circuits. Some of the more well known CAD programs include GenoCAD, Clotho framework and j5. GenoCAD uses grammars, which are either opensource or user generated "rules" which include the available genes and known gene interactions for cloning organisms. Clotho framework uses the Biobrick standard rules.<br />
<br />
最近人工基因合成领域的发展以及行业内竞争的相应加剧,显著降低了基因合成的价格和等待时间,并帮助改进了电路设计中使用的方法。目前,由于对已知的多基因相互作用缺乏充分整理、数学模型也不完善,电路设计的改进还比较缓慢。这个问题正通过应用计算机辅助设计(CAD)软件来解决,即利用图像、文本和应用于生物电路的编程语言来提供电路的多媒体表示。较为著名的 CAD 程序包括 GenoCAD、Clotho 框架和 j5。GenoCAD 使用语法规则,这些规则要么是开源的,要么由用户生成,其中包括克隆生物体的可用基因和已知的基因相互作用。Clotho 框架使用生物积木(Biobrick)标准规则。<br />
<br />
[[File:SynBioCirc-NandLogicGate.jpg|frame|center|The logical [[Negated AND gate]].逻辑与非门<ref name="rocha" /><ref name="buchler" /> If Signal A '''AND''' Signal B are present, then the desired gene product will '''NOT''' result. All promoters shown are inducible. The activating promoter for the output gene is constitutive, and thus not shown. The constitutive promoter for the output gene keeps it "on" and is only deactivated when (similar to the AND gate) a complex as a result of two input signal gene products blocks the expression of the output gene.如果信号A和信号B同时存在,那么期望的基因产物将'''不会'''被表达。图中所示的启动子都是诱导型的。输出基因的激活启动子是组成型的,因而未在图中画出。该组成型启动子使输出基因保持“开启”状态,只有当(与“与门”类似)两个输入信号的基因产物形成的复合体阻断输出基因的表达时,输出才会关闭。]]<br />
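The AND, OR, and NAND behaviours in the figure captions above can be sketched with steady-state Hill functions. This is a toy model, and the function names, the Hill parameters, and the 0.5 "expressed" threshold are all illustrative assumptions: the AND gate needs both inducer-driven products to form an activating complex, the OR gate is activated by either product alone, and the NAND gate is a constitutive output switched off only by the two-product complex.

```python
def hill(x, K=1.0, n=2.0):
    """Fraction of promoter activity induced by inducer concentration x."""
    return x ** n / (K ** n + x ** n)

def and_gate(a, b):
    # output requires the complex formed from both signal-driven products
    return hill(a) * hill(b)

def or_gate(a, b):
    # either signal's product alone can activate the output promoter
    return 1.0 - (1.0 - hill(a)) * (1.0 - hill(b))

def nand_gate(a, b):
    # constitutive output, blocked only by the two-product complex
    return 1.0 - hill(a) * hill(b)

ON, OFF = 10.0, 0.0   # inducer present / absent
```

Reading any output above 0.5 as "gene product expressed", these three functions reproduce the truth tables described in the captions.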
<br />
<br />
<br />
=== Analog tuners 模拟调谐器===<br />
<br />
Using negative feedback and identical promoters, linearizer gene circuits can impose uniform gene expression that depends linearly on extracellular chemical inducer concentration.<ref name="pmid19279212">{{cite journal | vauthors = Nevozhay D, Adams RM, Murphy KF, Josic K, Balázsi G| title = Negative autoregulation linearizes the dose-response and suppresses the heterogeneity of gene expression | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 106 | issue = 13 | pages = 5123-8 | date = March 31, 2009 | pmid = 19279212 | pmc = 2654390 | doi = 10.1073/pnas.0809901106 }}</ref><br />
<br />
<br />
<br />
=== Controllers of gene expression heterogeneity 基因表达异质性的控制===<br />
<br />
Synthetic gene circuits can control gene expression heterogeneity, which can be controlled independently of the gene expression mean.<ref name="pmid17189188">{{cite journal | vauthors = Blake WJ, Balázsi G, Kohanski MA, Isaacs FJ, Murphy KF, Kuang Y, Cantor CR, Walt DR, Collins JJ| title = Phenotypic Consequences of Promoter-Mediated Transcriptional Noise | journal = Molec. Cell | volume = 24 | issue = 6 | pages = 853-65 | date = December 28, 2006 | pmid = 17189188 | doi = 10.1016/j.molcel.2006.11.003 }}</ref><br />
<br />
<br />
<br />
=== Other engineered systems 其他工程系统===<br />
<br />
<!--- Categories ---><br />
<br />
<br />
Engineered systems are the result of implementation of combinations of different control mechanisms. A limited counting mechanism was implemented by a pulse-controlled gene cascade<ref>{{cite journal | last1 = Friedland | first1 = A.E. | last2 = Lu | first2 = T.K | last3 = Wang | first3 = X. | last4 = Shi | first4 = D. | last5 = Church | first5 = G. | last6 = Collins | first6 = J.J. | year = 2009 | title = Synthetic Gene Networks That Count | url = | journal = Science | volume = 324 | issue = 5931| pages = 1199–1202 | doi=10.1126/science.1172005 | pmid=19478183 | pmc=2690711}}</ref> and application of logic elements enables genetic "programming" of cells as in the research of Tabor et al., which synthesized a photosensitive bacterial edge detection program.<ref>{{cite journal | last1 = Tabor | first1 = J.J. | last2 = Salis | first2 = H.M. | last3 = Simpson | first3 = Z.B. | last4 = Chevalier | first4 = A.A. | last5 = Levskaya | first5 = A. | last6 = Marcotte | first6 = E.M. | last7 = Voigt | first7 = C.A. | last8 = Ellington | first8 = A.D. | year = 2009 | title = A Synthetic Edge Detection Program | url = | journal = Cell | volume = 137 | issue = 7| pages = 1272–1281 | doi=10.1016/j.cell.2009.04.048| pmid = 19563759 | pmc = 2775486 }}</ref><br />
<br />
Category:Synthetic biology<br />
<br />
类别: 合成生物学<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biological circuit]]. Its edit history can be viewed at [[合成生物电路/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E5%90%88%E6%88%90%E7%94%9F%E7%89%A9%E7%94%B5%E8%B7%AF&diff=17879合成生物电路2020-11-04T12:54:07Z<p>粲兰:</p>
<hr />
<div>此词条暂由彩云小译翻译,翻译字数共956,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
--[[用户:小趣木木|小趣木木]]([[用户讨论:小趣木木|讨论]])文本缺失 需要补充<br />
{{Synthetic biology}}<br />
<br />
[[File:Lac Operon.svg|thumb|275px|Lac Operon|The ''lac'' operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. <br /><br />
<br />
[[File:Lac Operon.svg|thumb|275px|Lac Operon|The lac operon is a natural biological circuit on which many synthetic circuits are based. Top: Repressed, Bottom: Active. <br /><br />
<br />
[文件: Lac Operon.svg | thumb | 275px | Lac Operon | ''lac'' 操纵子是一种天然的生物电路,许多合成电路都以它为基础。上图: 阻遏状态,下图: 活跃状态。< br/><br />
<br />
'''''1'': RNA polymerase, ''2'': Repressor, ''3'': Promoter, ''4'': Operator, ''5'': Lactose, ''6'': ''lacZ'', ''7'': ''lacY'', ''8'': ''lacA''.]]<br />
<br />
1: RNA polymerase, 2: Repressor, 3: Promoter, 4: Operator, 5: Lactose, 6: lacZ, 7: lacY, 8: lacA.]]<br />
<br />
1: RNA 聚合酶,2: 阻遏蛋白,3: 启动子,4: 操纵基因,5: 乳糖,6: lacZ,7: lacY,8: lacA。]<br />
<br />
<br />
<br />
'''Synthetic biological circuits''' are an application of [[synthetic biology]] where biological parts inside a [[Cell (biology)|cell]] are designed to perform logical functions mimicking those observed in [[electronic circuit]]s. The applications range from simply inducing production to adding a measurable element, like [[Green Fluorescent Protein|GFP]], to an existing [[Gene regulatory network|natural biological circuit]], to implementing completely new systems of many parts.<ref name="Kobayashi">{{cite journal | last1 = Kobayashi | first1 = H. | last2 = Kærn | first2 = M. | last3 = Araki | first3 = M. | last4 = Chung | first4 = K. | last5 = Gardner | first5 = T. S. | last6 = Cantor | first6 = C. R. | last7 = Collins | first7 = J. J. | year = 2004 | title = Programmable cells: Interfacing natural and engineered gene networks | journal = PNAS | volume = 101 | issue = 22| pages = 8414–8419 | doi=10.1073/pnas.0402940101 | pmid=15159530 | pmc=420408}}</ref><br />
<br />
Synthetic biological circuits are an application of synthetic biology where biological parts inside a cell are designed to perform logical functions mimicking those observed in electronic circuits. The applications range from simply inducing production to adding a measurable element, like GFP, to an existing natural biological circuit, to implementing completely new systems of many parts.<br />
<br />
合成生物电路是合成生物学的一种应用:细胞内的生物部件被设计用来执行模仿电子电路的逻辑功能。其应用范围从简单地诱导产物表达,到在现有的天然生物电路中添加可测量的元件(如绿色荧光蛋白 GFP),再到实现由许多部件组成的全新系统。<br />
<br />
[[Image:Protein translation.gif|thumb|300px| A [[ribosome]] is a [[biological machine]].]]<br />
<br />
A ribosome is a biological machine.<br />
<br />
核糖体是一种生物机器。<br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented.<ref name="SynBioFaq">{{cite web|title=Synthetic Biology: FAQ|url=http://syntheticbiology.org/FAQ.html|work=SyntheticBiology.org|accessdate=21 December 2011|url-status=dead|archiveurl=https://web.archive.org/web/20021212065409/http://syntheticbiology.org/faq.html|archivedate=12 December 2002}}</ref> These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "[[biological machine]]s" to perform a vast range of useful functions.<ref name="Kobayashi"/><br />
<br />
The goal of synthetic biology is to generate an array of tunable and characterized parts, or modules, with which any desirable synthetic biological circuit can be easily designed and implemented. These circuits can serve as a method to modify cellular functions, create cellular responses to environmental conditions, or influence cellular development. By implementing rational, controllable logic elements in cellular systems, researchers can use living systems as engineered "biological machines" to perform a vast range of useful functions. The second, by Michael Elowitz and Stanislas Leibler, showed that three repressor genes could be connected to form a negative feedback loop termed the Repressilator that produces self-sustaining oscillations of protein levels in E. coli.<br />
<br />
合成生物学的目标是生成一系列可调节且经过表征的部件或模块,利用它们可以轻松地设计和实现任何想要的合成生物电路。这些电路可以用来修改细胞功能、使细胞对环境条件作出响应,或影响细胞的发育。通过在细胞系统中实现合理、可控的逻辑元件,研究人员可以将活体系统作为工程化的“生物机器”来执行大量有用的功能。迈克尔·埃洛维茨(Michael Elowitz)和斯坦尼斯拉斯·雷布勒(Stanislas Leibler)的第二项研究表明,三个阻遏基因可以连接成一个负反馈环路,称为'''<font color="#ff8000">抑制震荡子(Repressilator)</font>''',它可以在大肠杆菌中产生蛋白质水平的自维持振荡。<br />
<br />
<br />
<br />
== History 发展历程 ==<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in systems biology with more publications detailing synthetic biological circuits published every year. There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition manages the creation and standardization of BioBrick parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
目前,合成电路是系统生物学研究的一个新兴领域,每年都有更多详细介绍合成生物电路的论文发表。在鼓励教育和推广方面,外界也有很大的兴趣: 国际基因工程机器大赛负责管理生物积木(BioBrick)部件的创建和标准化,使本科生和高中生能够设计自己的合成生物电路。<br />
<br />
The first natural gene circuit studied in detail was the [[lac operon]]. In studies of [[diauxie|diauxic growth]] of ''[[E. coli]]'' on two-sugar media, [[Jacques Monod]] and [[Francois Jacob]] discovered that ''E.coli'' preferentially consumes the more easily processed [[glucose]] before switching to [[lactose]] metabolism. They discovered that the mechanism that controlled the metabolic "switching" function was a two-part control mechanism on the lac operon. When lactose is present in the cell the [[enzyme]] [[β-galactosidase]] is produced to convert lactose into [[glucose]] or [[galactose]]. When lactose is absent in the cell the lac repressor inhibits the production of the enzyme β-galactosidase to prevent any inefficient processes within the cell.<br />
<br />
<br />
<br />
The lac operon is used in the [[biotechnology]] industry for production of [[recombinant DNA|recombinant]] [[proteins]] for therapeutic use. The gene or genes for producing an [[exogenous]] protein are placed on a [[plasmid]] under the control of the lac promoter. Initially the cells are grown in a medium that does not contain lactose or other sugars, so the new genes are not expressed. Once the cells reach a certain point in their growth, [[IPTG|Isopropyl β-D-1-thiogalactopyranoside (IPTG)]] is added. IPTG, a molecule similar to lactose, but with a sulfur bond that is not hydrolyzable so that ''E. coli'' does not digest it, is used to activate or "[[Regulation of gene expression#Inducible vs. repressible systems|induce]]" the production of the new protein. Once the cells are induced, it is difficult to remove IPTG from the cells and therefore it is difficult to stop expression.<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for metabolic engineering, and synthetic biology. Those demonstrated successfully include pharmaceutical production, and fuel production. However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple repressor or promoter is introduced to facilitate creation of the product, or inhibition of a competing pathway. However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
<br />
合成生物电路的应用既有近期的也有长期的,包括代谢工程和合成生物学中的多种应用。已成功示范的包括药物生产和燃料生产。然而,如果不借助合成细胞电路的基本原理,直接导入基因的方法本身并不有效。例如,上述每个成功的系统都采用了某种引入“全或无”诱导或表达的方法: 即引入一个简单的阻遏蛋白或启动子,以促进产物的生成或抑制竞争通路的生物电路。然而,由于对细胞网络和天然电路的了解有限,实现具有更精确控制和反馈的更稳健方案受到阻碍。这正是当前对合成细胞电路的兴趣所在。<br />
<br />
<br />
<br />
Two early examples of synthetic biological circuits were published in [[Nature (journal)|Nature]] in 2000. One, by Tim Gardner, Charles Cantor, and [[James Collins (bioengineer)|Jim Collins]] working at [[Boston University]], demonstrated a "bistable" switch in ''E. coli''. The switch is turned on by heating the culture of bacteria and turned off by addition of IPTG. They used GFP as a reporter for their system.<ref name="Gardner">Gardner, T.s., Cantor, C.R., Collins, J. Construction of a genetic toggle switch in Escherichia coli. ''Nature'' 403, 339-342 (20 January 2000).</ref> The second, by [[Michael Elowitz]] and [[Stanislas Leibler]], showed that three repressor genes could be connected to form a negative feedback loop termed the [[Repressilator]] that produces self-sustaining oscillations of protein levels in ''E. coli.''<ref>{{Cite journal|last=Stanislas Leibler|last2=Elowitz|first2=Michael B.|date=January 2000|title=A synthetic oscillatory network of transcriptional regulators|journal=Nature|volume=403|issue=6767|pages=335–338|doi=10.1038/35002125|pmid=10659856|issn=1476-4687}}</ref><br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin. To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
对细胞电路理解的深入可以带来令人兴奋的新改造,例如能对环境刺激作出反应的细胞。比如,可以开发出能感知有毒环境、并通过激活降解该毒素的通路来作出反应的细胞。要研制这样的细胞,就必须创建一个能对给定刺激作出适当响应的复杂合成细胞电路。<br />
<br />
<br />
<br />
Currently, synthetic circuits are a burgeoning area of research in [[systems biology]] with more publications detailing synthetic biological circuits published every year.<ref>{{cite journal | last1 = Purnick | first1 = Priscilla E. M. | last2 = Weis | first2 = Ron | year = 2009 | title = The second wave of synthetic biology: from modules to systems | url = | journal = Nature Reviews Molecular Cell Biology | volume = 10 | issue = 6| pages = 410–422 | doi = 10.1038/nrm2698 | pmid=19461664}}</ref> There has been significant interest in encouraging education and outreach as well: the International Genetically Engineered Machines Competition<ref>International Genetically Engineered Machines (iGem) http://igem.org/Main_Page</ref> manages the creation and standardization of [[BioBrick]] parts as a means to allow undergraduate and high school students to design their own synthetic biological circuits.<br />
<br />
Given that synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play" synthetic cells can be developed implementing only the pathways necessary for cell survival and reproduction. From this cell, to be thought of as a minimal genome cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground-up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
鉴于合成细胞电路代表了一种控制细胞活动的形式,可以推断,只要完全了解细胞通路,就可以开发出只实现细胞生存和繁殖所必需通路的“即插即用”合成细胞。从这个可视为最小基因组细胞的细胞出发,可以添加工具箱中的部件,为有效的反馈系统创建具有适当合成电路的明确定义的通路。由于这种自底向上的基本构建方法,加上拟议中的电路部件图谱数据库,就可以借鉴计算机或电子电路建模的技术来重新设计细胞并对细胞建模,以便于故障排除以及预测行为和产量。<br />
<br />
<br />
<br />
== Interest and goals 研究方向和目标==<br />
<br />
Both immediate and long term applications exist for the use of synthetic biological circuits, including different applications for [[metabolic engineering]], and [[synthetic biology]]. Those demonstrated successfully include pharmaceutical production,<ref>{{cite journal | last1 = Ro | first1 = D.-K. | last2 = Paradise | first2 = E.M. | last3 = Ouellet | first3 = M. | last4 = Fisher | first4 = K.J. | last5 = Newman | first5 = K.L. | last6 = Ndungu | first6 = J.M. | last7 = Ho | first7 = K.A. | last8 = Eachus | first8 = R.A. | last9 = Ham | first9 = T.S. | last10 = Kirby | first10 = J. | last11 = Chang | first11 = M.C.Y. | last12 = Withers | first12 = S.T. | last13 = Shiba | first13 = Y. | last14 = Sarpong | first14 = R. | last15 = Keasling | first15 = J.D. | year = 2006 | title = Production of the antimalarial drug precursor artemisinic acid in engineered yeast | url = | journal = Nature | volume = 440 | issue = 7086| pages = 940–943 | doi=10.1038/nature04640 | pmid=16612385}}</ref> and fuel production.<ref>{{cite journal | last1 = Fortman | first1 = J.L. | last2 = Chhabra | first2 = S. | last3 = Mukhopadhyay | first3 = A. | last4 = Chou | first4 = H. | last5 = Lee | first5 = T.S. | last6 = Steen | first6 = E. | last7 = Keasling | first7 = J.D. | year = 2008 | title = Biofuel alternatives to ethanol: pumping the microbial well | url = https://digital.library.unt.edu/ark:/67531/metadc1013351/| journal = Trends Biotechnol | volume = 26 | issue = 7| pages = 375–381 | doi=10.1016/j.tibtech.2008.03.008| pmid = 18471913 }}</ref> However methods involving direct genetic introduction are not inherently effective without invoking the basic principles of synthetic cellular circuits. For example, each of these successful systems employs a method to introduce all-or-none induction or expression. This is a biological circuit where a simple [[repressor]] or [[promoter (genetics)|promoter]] is introduced to facilitate creation of the product, or inhibition of a competing pathway. 
However, with the limited understanding of cellular networks and natural circuitry, implementation of more robust schemes with more precise control and feedback is hindered. Therein lies the immediate interest in synthetic cellular circuits.<br />
<br />
<br />
<br />
Development in understanding cellular circuitry can lead to exciting new modifications, such as cells which can respond to environmental stimuli. For example, cells could be developed that signal toxic surroundings and react by activating pathways used to degrade the perceived toxin.<ref>{{cite journal | last1 = Keasling | first1 = J.D. | year = 2008 | title = Synthetic biology for synthetic chemistry. | url = | journal = ACS Chem Biol | volume = 3 | issue = 1| pages = 64–76 | doi=10.1021/cb7002434| pmid = 18205292 | title-link = synthetic chemistry }}</ref> To develop such a cell, it is necessary to create a complex synthetic cellular circuit which can respond appropriately to a given stimulus.<br />
<br />
Given that synthetic cellular circuits represent a form of control for cellular activities, it can be reasoned that with complete understanding of cellular pathways, "plug and play"<ref name="Kobayashi" /> cells with well defined genetic circuitry can be engineered. It is widely believed that if a proper toolbox of parts is generated,<ref>{{cite journal | last1 = Lucks | first1 = Julius B | last2 = Qi | first2 = Lei | last3 = Whitaker | first3 = Weston R | last4 = Arkin | first4 = Adam P | year = 2008 | title = Toward scalable parts families for predictable design of biological circuits | url = | journal = Current Opinion in Microbiology | volume = 11 | issue = 6| pages = 567–573 | doi = 10.1016/j.mib.2008.10.002 | pmid = 18983935 }}</ref> synthetic cells can be developed implementing only the pathways necessary for cell survival and reproduction. From this cell, to be thought of as a minimal [[genome]] cell, one can add pieces from the toolbox to create a well defined pathway with appropriate synthetic circuitry for an effective feedback system. Because of the basic ground-up construction method, and the proposed database of mapped circuitry pieces, techniques mirroring those used to model computer or electronic circuits can be used to redesign cells and model cells for easy troubleshooting and predictive behavior and yields.<br />
<br />
<br />
== Example circuits 电路示例 ==<br />
<br />
<br />
<br />
<br />
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression. <br />
<br />
埃洛维茨(Elowitz)等人和 Fung 等人创造了振荡电路,利用多种自我调节机制产生随时间振荡的基因产物表达。<br />
<br />
=== Oscillators 振荡器 ===<br />
<br />
# [[Repressilator]]<br />
<br />
# Mammalian tunable synthetic oscillator<br />
<br />
<br />
# Bacterial tunable synthetic oscillator<br />
<br />
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli resulting in persistent responses.<br />
<br />
加德纳(Gardner)等人利用两个控制单元之间的相互抑制,实现了一个能够以双稳态方式控制细胞的拨动开关: 瞬时刺激产生持续性反应。<br />
<br />
# Coupled bacterial oscillator<br />
耦合细菌振荡器<br />
<br />
# Globally coupled bacterial oscillator<br />
全局耦合细菌振荡器<br />
<br />
<br />
Elowitz et al. and Fung et al. created oscillatory circuits that use multiple self-regulating mechanisms to create a time-dependent oscillation of gene product expression.<ref>{{cite journal | last1 = Elowitz | first1 = M.B. | last2 = Leibler | first2 = S. | year = 2000 | title = A synthetic oscillatory network of transcriptional regulators | pmid = 10659856| journal = Nature | volume = 403 | issue = 6767| pages = 335–338 | doi=10.1038/35002125}}</ref><ref>{{cite journal | last1 = Fung | first1 = E. | last2 = Wong | first2 = W.W. | last3 = Suen | first3 = J.K. | last4 = Bulter | first4 = T. | last5 = Lee | first5 = S. | last6 = Liao | first6 = J.C. | year = 2005 | title = A synthetic gene–metabolic oscillator | url = | journal = Nature | volume = 435 | issue = 7038| pages = 118–122 | doi=10.1038/nature03508| pmid = 15875027 }}</ref> <br />
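The oscillatory behaviour described above can be illustrated with a toy numerical model. The sketch below integrates the dimensionless repressilator equations of Elowitz and Leibler (three mRNA/protein pairs in a repression ring) with a simple Euler scheme; the parameter values and initial conditions are illustrative choices, not the published fits.

```python
# Toy simulation of the repressilator: three genes in a ring, each gene's
# protein repressing transcription of the next. Dimensionless model after
# Elowitz & Leibler (2000); parameter values here are illustrative.
alpha, alpha0, beta, n = 216.0, 0.216, 5.0, 2.0
dt, steps = 0.01, 50_000

m = [0.0, 0.0, 0.0]        # mRNA concentrations
p = [5.0, 0.0, 15.0]       # protein concentrations (asymmetric start)

trace = []                 # protein 0 over time
for _ in range(steps):
    dm = [alpha / (1.0 + p[(i - 1) % 3] ** n) + alpha0 - m[i] for i in range(3)]
    dp = [beta * (m[i] - p[i]) for i in range(3)]
    m = [m[i] + dt * dm[i] for i in range(3)]
    p = [p[i] + dt * dp[i] for i in range(3)]
    trace.append(p[0])

# Sustained oscillation: the protein level keeps crossing its long-run mean.
tail = trace[len(trace) // 2:]
mean = sum(tail) / len(tail)
crossings = sum(1 for a, b in zip(tail, tail[1:]) if (a - mean) * (b - mean) < 0)
print("oscillating:", crossings > 4)
```

With these parameters the symmetric fixed point is unstable, so the protein levels settle into a limit cycle rather than a steady state.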
<br />
<br />
<br />
=== Bistable switches 双稳态开关===<br />
<br />
Synthetic gene circuits can control gene expression heterogeneity and can be controlled independently of the gene expression mean.<br />
<br />
合成基因电路可以控制基因表达的异质性,并且可以独立于基因表达均值进行控制。<br />
<br />
# Toggle-switch<br />
<br />
Gardner et al. used mutual repression between two control units to create an implementation of a toggle switch capable of controlling cells in a bistable manner: transient stimuli resulting in persistent responses<ref name="Gardner" />.<br />
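The bistable behaviour above can be sketched with a minimal symmetric two-repressor model using Hill-type repression. The parameters below are arbitrary illustrative choices, not Gardner et al.'s measured values.

```python
# Toy model of a genetic toggle switch: two repressors u and v inhibit
# each other's synthesis. With strong enough repression the system is
# bistable, so different transient stimuli latch different stable states.
def settle(u, v, a=10.0, n=2.0, dt=0.01, steps=20_000):
    """Euler-integrate the toggle ODEs to (near) steady state."""
    for _ in range(steps):
        du = a / (1.0 + v ** n) - u
        dv = a / (1.0 + u ** n) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

state_u = settle(5.0, 0.0)   # transient stimulus favouring repressor u
state_v = settle(0.0, 5.0)   # transient stimulus favouring repressor v
print(state_u[0] > state_u[1], state_v[1] > state_v[0])
```

The two runs start from different transient stimuli and converge to distinct stable states (high-u/low-v versus low-u/high-v), which is the "transient stimulus, persistent response" property of the switch.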
<br />
<br />
<br />
Engineered systems are the result of implementation of combinations of different control mechanisms. A limited counting mechanism was implemented by a pulse-controlled gene cascade and application of logic elements enables genetic "programming" of cells as in the research of Tabor et al., which synthesized a photosensitive bacterial edge detection program.<br />
<br />
工程系统是不同控制机制组合实现的结果。一个有限的计数机制通过脉冲控制的基因级联得以实现;逻辑元件的应用使细胞的遗传“编程”成为可能,正如泰伯(Tabor)等人的研究所示,他们合成了一个光敏细菌边缘检测程序。<br />
<br />
=== Logical operators 逻辑运算===<br />
<br />
[[File:SynBioCirc-AndLogicGate.jpg|frame|center|The logical [[AND gate]].<ref name="rocha">{{cite journal | last1 = Silva-Rocha | first1 = R. | last2 = de Lorenzo | first2 = V. | year = 2008 | title = Mining logic gates in prokaryotic transcriptional regulation networks | url = | journal = FEBS Letters | volume = 582 | issue = 8| pages = 1237–1244 | doi=10.1016/j.febslet.2008.01.060 | pmid=18275855}}</ref><ref name="buchler">{{cite journal | last1 = Buchler | first1 = N.E. | last2 = Gerland | first2 = U. | last3 = Hwa | first3 = T. | year = 2003 | title = On schemes of combinatorial transcription logic | journal = PNAS | volume = 100 | issue = 9| pages = 5136–5141 | doi=10.1073/pnas.0930314100 | pmid=12702751 | pmc=404558}}</ref> If Signal A '''AND''' Signal B are present, then the desired gene product will result. All promoters shown are inducible, activated by the displayed gene product. Each signal activates expression of a separate gene (shown in light blue). The expressed proteins then can either form a complete complex in [[cytosol]], that is capable of activating expression of the output (shown), or can act separately to induce expression, such as separately removing an inhibiting protein and inducing activation of the uninhibited promoter.]]<br />
<br />
<br />
<br />
Computational design and evaluation of DNA circuits to achieve optimal performance<br />
<br />
实现最佳性能的 DNA 电路的计算设计和评估<br />
<br />
[[File:SynBioCirc-OrLogicGate.jpg|frame|center|The logical [[OR gate]].<ref name="rocha" /><ref name="buchler" /> If Signal A '''OR''' Signal B are present, then the desired gene product will result. All promoters shown are inducible. Either signal is capable of activating the expression of the output gene product, and only the action of a single promoter is required for gene expression. Post-transcriptional regulation mechanisms can prevent the presence of both inputs producing a compounded high output, such as implementing a low binding affinity [[ribosome binding site]].]]<br />
<br />
<br />
<br />
Recent developments in artificial gene synthesis and the corresponding increase in competition within the industry have led to a significant drop in price and wait time of gene synthesis and helped improve methods used in circuit design. At the moment, circuit design is improving at a slow pace because of insufficient organization of known multiple gene interactions and mathematical models. This issue is being addressed by applying computer-aided design (CAD) software to provide multimedia representations of circuits through images, text and programming language applied to biological circuits. Some of the more well known CAD programs include GenoCAD, Clotho framework and j5. GenoCAD uses grammars, which are either opensource or user generated "rules" which include the available genes and known gene interactions for cloning organisms. Clotho framework uses the Biobrick standard rules.<br />
<br />
最近人工基因合成领域的发展以及行业内竞争的相应加剧,使基因合成的价格和等待时间显著下降,并帮助改进了电路设计所用的方法。目前,由于对已知多基因相互作用的整理和数学模型尚不完善,电路设计进展缓慢。为了解决这一问题,人们应用计算机辅助设计(CAD)软件,通过图像、文本和应用于生物电路的编程语言来提供电路的多媒体表示。较著名的 CAD 程序包括 GenoCAD、Clotho 框架和 j5。GenoCAD 使用“语法”,即开源的或用户生成的“规则”,其中包括克隆生物体的可用基因和已知的基因相互作用。Clotho 框架使用生物积木(Biobrick)标准规则。<br />
<br />
[[File:SynBioCirc-NandLogicGate.jpg|frame|center|The logical [[Negated AND gate]].<ref name="rocha" /><ref name="buchler" /> If Signal A '''AND''' Signal B are present, then the desired gene product will '''NOT''' result. All promoters shown are inducible. The activating promoter for the output gene is constitutive, and thus not shown. The constitutive promoter for the output gene keeps it "on" and is only deactivated when (similar to the AND gate) a complex as a result of two input signal gene products blocks the expression of the output gene.]]<br />
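The gate behaviours in the figures above can be sketched as steady-state input-output maps. The model below uses Hill functions for inducible promoters; this is an illustrative simplification, and the parameters and threshold are arbitrary choices rather than measured values.

```python
# Transcriptional logic gates as steady-state maps from two input signal
# levels to a normalized output expression level in [0, 1].
def hill(x, k=1.0, n=2.0):
    """Activating Hill function: fraction of promoter activity at input x."""
    return x ** n / (k ** n + x ** n)

def gate_and(a, b):
    # Both gene products must be present to form the activating complex.
    return hill(a) * hill(b)

def gate_or(a, b):
    # Either promoter alone suffices to drive the output gene.
    return 1.0 - (1.0 - hill(a)) * (1.0 - hill(b))

def gate_nand(a, b):
    # The two-input complex represses an otherwise constitutive promoter.
    return 1.0 - gate_and(a, b)

ON, OFF, THRESHOLD = 10.0, 0.01, 0.5
for a in (OFF, ON):
    for b in (OFF, ON):
        print(a > 1, b > 1, "->",
              gate_and(a, b) > THRESHOLD,
              gate_or(a, b) > THRESHOLD,
              gate_nand(a, b) > THRESHOLD)
```

Thresholding the continuous output recovers the Boolean truth tables of AND, OR and NAND, mirroring how post-transcriptional regulation keeps the analog response close to digital behaviour.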
<br />
<br />
<br />
=== Analog tuners 模拟调谐器===<br />
<br />
Using negative feedback and identical promoters, linearizer gene circuits can impose uniform gene expression that depends linearly on extracellular chemical inducer concentration.<ref name="pmid19279212">{{cite journal | vauthors = Nevozhay D, Adams RM, Murphy KF, Josic K, Balázsi G| title = Negative autoregulation linearizes the dose-response and suppresses the heterogeneity of gene expression | journal = Proc. Natl. Acad. Sci. U.S.A. | volume = 106 | issue = 13 | pages = 5123-8 | date = March 31, 2009 | pmid = 19279212 | pmc = 2654390 | doi = 10.1073/pnas.0809901106 }}</ref><br />
<br />
<br />
<br />
=== Controllers of gene expression heterogeneity 基因表达异质性的控制===<br />
<br />
Synthetic gene circuits can control gene expression heterogeneity, which can be controlled independently of the gene expression mean.<ref name="pmid17189188">{{cite journal | vauthors = Blake WJ, Balázsi G, Kohanski MA, Isaacs FJ, Murphy KF, Kuang Y, Cantor CR, Walt DR, Collins JJ| title = Phenotypic Consequences of Promoter-Mediated Transcriptional Noise | journal = Molec. Cell | volume = 24 | issue = 6 | pages = 853-65 | date = December 28, 2006 | pmid = 17189188 | doi = 10.1016/j.molcel.2006.11.003 }}</ref><br />
<br />
<br />
<br />
=== Other engineered systems 其他工程系统===<br />
<br />
<!--- Categories ---><br />
<br />
<br />
Engineered systems are the result of implementation of combinations of different control mechanisms. A limited counting mechanism was implemented by a pulse-controlled gene cascade<ref>{{cite journal | last1 = Friedland | first1 = A.E. | last2 = Lu | first2 = T.K | last3 = Wang | first3 = X. | last4 = Shi | first4 = D. | last5 = Church | first5 = G. | last6 = Collins | first6 = J.J. | year = 2009 | title = Synthetic Gene Networks That Count | url = | journal = Science | volume = 324 | issue = 5931| pages = 1199–1202 | doi=10.1126/science.1172005 | pmid=19478183 | pmc=2690711}}</ref> and application of logic elements enables genetic "programming" of cells as in the research of Tabor et al., which synthesized a photosensitive bacterial edge detection program.<ref>{{cite journal | last1 = Tabor | first1 = J.J. | last2 = Salis | first2 = H.M. | last3 = Simpson | first3 = Z.B. | last4 = Chevalier | first4 = A.A. | last5 = Levskaya | first5 = A. | last6 = Marcotte | first6 = E.M. | last7 = Voigt | first7 = C.A. | last8 = Ellington | first8 = A.D. | year = 2009 | title = A Synthetic Edge Detection Program | url = | journal = Cell | volume = 137 | issue = 7| pages = 1272–1281 | doi=10.1016/j.cell.2009.04.048| pmid = 19563759 | pmc = 2775486 }}</ref><br />
<br />
Category:Synthetic biology<br />
<br />
类别: 合成生物学<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Synthetic biological circuit]]. Its edit history can be viewed at [[合成生物电路/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E8%87%AA%E5%A4%8D%E5%88%B6_Self-replication&diff=15535自复制 Self-replication2020-10-16T15:11:08Z<p>粲兰:</p>
<hr />
<div>此词条暂由袁一博翻译,未经人工整理和审校,带来阅读不便,请见谅。{{see also|Biological reproduction}}<br />
<br />
<br />
<br />
{{Use dmy dates|date=April 2019|cs1-dates=y}}<br />
<br />
<br />
<br />
[[Image:DNA chemical structure.svg|thumb|right|200px|[[Molecular structure]] of [[DNA]] ]]<br />
<br />
Molecular structure of DNA<br />
<br />
DNA 的分子结构<br />
<br />
'''Self-replication''' is any behavior of a [[dynamical system]] that yields construction of an identical or similar copy of itself. [[Cell (biology)|Biological cell]]s, given suitable environments, reproduce by [[cell division]]. During cell division, [[DNA]] is replicated and can be transmitted to offspring during [[reproduction]]. [[virus (biology)|Biological viruses]] can [[Viral replication|replicate]], but only by commandeering the reproductive machinery of cells through a process of infection. Harmful [[prion]] proteins can replicate by converting normal proteins into rogue forms.<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref> [[Computer virus]]es reproduce using the hardware and software already present on computers. Self-replication in [[robotics]] has been an area of research and a subject of interest in [[science fiction]]. Any self-replicating mechanism which does not make a perfect copy ([[mutation]]) will experience [[genetic variation]] and will create variants of itself. These variants will be subject to [[natural selection]], since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
Self-replication is any behavior of a dynamical system that yields construction of an identical or similar copy of itself. Biological cells, given suitable environments, reproduce by cell division. During cell division, DNA is replicated and can be transmitted to offspring during reproduction. Biological viruses can replicate, but only by commandeering the reproductive machinery of cells through a process of infection. Harmful prion proteins can replicate by converting normal proteins into rogue forms. Computer viruses reproduce using the hardware and software already present on computers. Self-replication in robotics has been an area of research and a subject of interest in science fiction. Any self-replicating mechanism which does not make a perfect copy (mutation) will experience genetic variation and will create variants of itself. These variants will be subject to natural selection, since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
自复制是动力系统中任何能构建出与自身相同或相似副本的行为。生物细胞在合适的环境下通过细胞分裂进行繁殖。在细胞分裂过程中,DNA 被复制,并可在繁殖过程中传递给后代。生物病毒可以复制,但只能通过感染过程,借用细胞的繁殖机器来实现。有害的朊病毒蛋白可以通过把正常蛋白质转化为异常形式来复制。计算机病毒利用计算机上已有的硬件和软件进行复制。机器人学中的自复制一直是一个研究领域,也是科幻小说中备受关注的主题。任何不能产生完美副本(突变)的自复制机制都会经历遗传变异,并产生自身的变体。这些变体将受到自然选择的作用,因为其中一些会比其他变体更善于在当前环境中生存,并在繁殖上胜过它们。<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Overview 综述==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Theory 理论===<br />
<br />
<br />
<br />
{{See also|Von Neumann universal constructor}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Early research by [[John von Neumann]]<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref> established that replicators have several parts:<br />
<br />
Early research by John von Neumann established that replicators have several parts:<br />
<br />
约翰·冯·诺伊曼的早期研究表明复制因子有几个部分:<br />
<br />
<br />
<br />
<br />
<br />
*A coded representation of the replicator<br />
<br />
<br />
<br />
*A mechanism to copy the coded representation<br />
<br />
<br />
<br />
*A mechanism for effecting construction within the host environment of the replicator<br />
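Von Neumann's three parts can be illustrated in software by a quine, a program whose output is its own source code. In this sketch (Python, chosen here purely for illustration) the string `s` is the coded representation, the `print` statement is the copy mechanism, and the interpreter is the host environment that effects construction:

```python
# A minimal software self-replicator (quine). Its output is exactly its
# own two-line source: the string s encodes the program, and printing
# s % s performs the "construction" of the copy.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Note that, like the "assisted replication" discussed below, the quine still relies on machinery it does not itself contain (the interpreter), which is why the host environment appears explicitly in von Neumann's scheme.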
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
这种模式可能存在例外,尽管尚未实现任何例外。例如,科学家们已经接近于在由 RNA 单体和转录酶溶液构成的“环境”中构建[https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ 可复制的 RNA]。在这种情况下,机体就是基因组,而专门的复制机制是外部的。对外部复制机制的需求尚未被克服,这类系统更准确的描述是“辅助复制”而非“自复制”。<br />
<br />
<br />
<br />
<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a [[crystal]].<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a crystal.<br />
<br />
然而,最简单的可能情况是只存在一个基因组。如果没有对自我复制步骤的具体说明,一个只有基因组的系统也许更应被描述为类似晶体的东西。<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Classes of self-replication 自复制的类别===<br />
<br />
<br />
<br />
Recent research<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref> has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
Recent research has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
最近的研究已经开始对复制因子进行分类,通常基于它们所需要的支持程度。<br />
<br />
<br />
<br />
<br />
<br />
*Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms.<br />
<br />
*自然复制因子的设计全部或绝大部分来自非人类来源。这样的系统包含自然生命形式。<br />
<br />
<br />
*[[Autotroph]]ic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products.<br />
<br />
*自养型(Autotrophic)复制因子可以在自然环境中进行自我复制。它们自行采掘所需的原材料。据推测,人类可以设计出非生物的自养型复制因子,并且这些复制因子可以轻易地接受人类产品的规格。<br />
<br />
<br />
*Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire.<br />
<br />
*自我再生产系统是一种假想系统,它可以利用金属棒和金属丝等工业原料生产自身的拷贝。<br />
<br />
<br />
*Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale.<br />
<br />
*自组装系统利用已经制造完成并运送过来的零件组装出自身的拷贝。这类系统的简单例子已经在宏观尺度上得到演示。<br />
<br />
<br />
<br />
<br />
<br />
The design space for machine replicators is very broad. A comprehensive study<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> to date by [[Robert Freitas]] and [[Ralph Merkle]] has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.<br />
<br />
<br />
机器复制因子的设计空间非常广阔。迄今为止,罗伯特·弗雷塔斯(Robert Freitas)和拉尔夫·默克尔(Ralph Merkle)的综合研究已经确定了137个设计维度,并将其分为十几个独立的类别,包括:(1)复制控制,(2)复制信息,(3)复制基质,(4)复制因子结构,(5)被动部件,(6)主动子单元,(7)复制因子能量学,(8)复制因子运动学,(9)复制过程,(10)复制因子性能,(11)产物结构,和(12)可进化性。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===A self-replicating computer program 一种自复制的电脑程序===<br />
<br />
<br />
<br />
{{Main|Quine (computing)}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[computer science]] a [[Quine (computing)|quine]] is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the [[Python (programming language)|Python programming language]] is:<br />
<br />
<br />
在计算机科学中,quine 是一种自我复制的计算机程序,当执行时,输出自己的代码。例如,利用Python语言编写的一个 quine 是:<br />
<br />
<br />
<br />
<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
<br />
<br />
<br />
<br />
<br />
<br />
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.<br />
<br />
<br />
一种更简单的方法是编写一个程序,这个程序将复制它所指向的任何数据流,然后指向它自己。在这种情况下,程序既被当作可执行代码,也被当作要操作的数据。这种方法在包括生物生命在内的大多数自复制系统中都很常见,而且更简单,因为它不需要程序包含对自身的完整描述。<br />
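下面是对这种“复制数据流”做法的一个最小 Python 示意(文件名 <code>copier.py</code> 为假设的示例名,非本文规定):程序只负责把指向它的字节流原样复制到标准输出;当它被指向自己的源文件时,输出即等于自身代码。

```python
import sys

def replicate(path):
    # Copy the byte stream at `path` to standard output unchanged.
    # Pointed at its own source file -- e.g. `python copier.py copier.py` --
    # the program's output equals its own code: the source file is treated
    # both as executable code and as data to be copied.
    with open(path) as stream:
        sys.stdout.write(stream.read())
```

与 quine 不同,这里程序本身无需包含对自身的完整描述,复制能力来自把“代码”当作“数据”处理。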
<br />
<br />
<br />
<br />
<br />
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.<br />
<br />
<br />
在许多编程语言中,空程序是合法的,执行时不会产生错误或其他输出。因此其输出与源代码相同,这样的程序是平凡的自复制程序。<br />
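这一点可以直接验证。下面的 Python 片段(一个假设性的小实验,临时文件路径由系统生成)运行一个空源文件,并检查其输出与其(空的)源代码一致:

```python
import os
import subprocess
import sys
import tempfile

# An empty source file is a legal program in Python; running it yields
# no output, so output and source coincide -- a trivial quine.
fd, path = tempfile.mkstemp(suffix=".py")
os.close(fd)  # the file is created empty
result = subprocess.run([sys.executable, path],
                        capture_output=True, text=True)
assert result.returncode == 0 and result.stdout == ""
os.unlink(path)
```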
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating tiling 自复制式平铺===<br />
<br />
<br />
<br />
{{See also|Self-similarity}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[geometry]] a self-replicating tiling is a tiling pattern in which several [[congruence (geometry)|congruent]] tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as [[tessellation]]. The "sphinx" [[hexiamond]] is the only known self-replicating [[pentagon]].<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref> For example, four such [[concave polygon|concave]] pentagons can be joined together to make one with twice the dimensions.<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref> [[Solomon W. Golomb]] coined the term [[rep-tiles]] for self-replicating tilings.<br />
<br />
<br />
在几何学中,自复制式平铺是这样一种平铺图案:若干个全等的拼块可以拼接在一起,形成一个与原拼块相似的更大拼块。这是被称为镶嵌的研究领域的一个方面。“狮身人面像”六联三角形(hexiamond)是已知唯一能自复制的五边形。例如,四个这样的凹五边形可以拼接在一起,形成一个尺寸为原来两倍的五边形。所罗门·W·格伦布(Solomon W. Golomb)创造了术语 rep-tiles 来描述自复制式平铺。<br />
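rep-tile 的核心性质可以用代码验证:一个 rep-4 拼块每细分一次就变成 4 个半尺寸的全等拷贝,细分 k 次得到 4^k 块,且总面积不变。狮身人面像拼块的细分坐标较繁琐,下面的示意改用同为 rep-4 的正方形(这是作者原文之外的简化假设):

```python
def subdivide(square):
    # Split an axis-aligned square (x, y, side) into four half-size
    # congruent copies: the square is a rep-4 rep-tile, like the sphinx.
    x, y, s = square
    h = s / 2
    return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]

def iterate(square, k):
    # Apply the rep-4 subdivision k times.
    tiles = [square]
    for _ in range(k):
        tiles = [t for sq in tiles for t in subdivide(sq)]
    return tiles

tiles = iterate((0.0, 0.0, 1.0), 3)
# A rep-4 tile yields 4**k congruent pieces after k subdivisions,
# and the pieces exactly preserve the original area.
assert len(tiles) == 4 ** 3
assert abs(sum(s * s for _, _, s in tiles) - 1.0) < 1e-9
```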
<br />
<br />
<br />
<br />
<br />
In 2012, [[Lee Sallows]] identified rep-tiles as a special instance of a [[self-tiling tile set]] or setiset. A setiset of order ''n'' is a set of ''n'' shapes that can be assembled in ''n'' different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-''n'' rep-tile is just a setiset composed of ''n'' identical pieces.<br />
<br />
<br />
2012年,李·萨洛斯(Lee Sallows)将 rep-tiles 认定为自平铺拼块集(setiset)的一个特例。一个 ''n'' 阶 setiset 是一组 ''n'' 个形状,它们可以以 ''n'' 种不同的方式组合起来,形成自身的更大复制品。每个形状各不相同的 setiset 被称为“完美的”。一个 rep-''n'' 的 rep-tile 就是由 ''n'' 个相同部分组成的 setiset。<br />
<br />
{|
|- style="vertical-align:bottom;"
| [[File:Self-replication of sphynx hexidiamonds.svg|thumb|left|260px|Four '[[Sphinx tiling|sphinx]]' hexiamonds can be put together to form another sphinx. 四个“狮身人面像”六联三角形(hexiamond)可以拼成另一个狮身人面像。]]
| [[File:A rep-tile-based setiset of order 4.png|thumb|right|290px|A perfect [[Self-tiling tile set|setiset]] of order 4 一个完美的四阶 setiset]]
|}
<br />
{{clear}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self replicating clay crystals 自复制的粘土晶体===<br />
<br />
<br />
<br />
One form of natural self-replication that isn't based on DNA or RNA occurs in clay crystals.<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref> Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if e.g. placed in a water solution containing the crystal components; automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.<br />
<br />
<br />
粘土晶体中存在一种不基于 DNA 或 RNA 的天然自复制。粘土由大量的小晶体组成,而且粘土是一种促进晶体生长的环境。晶体由规则的原子晶格组成,例如放置在含有晶体成分的水溶液中时能够生长,自动地把晶体边界上的原子排列成晶体形式。晶体中规则原子结构被破坏的地方可能存在不规则性,当晶体生长时,这些不规则性可能会传播开来,形成一种晶体不规则性的自我复制。由于这些不规则性可能影响晶体断裂形成新晶体的概率,具有这种不规则性的晶体甚至可以被认为经历了某种进化发展。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Applications 应用===<br />
<br />
<br />
<br />
It is a long-term goal of some engineering sciences to achieve a [[clanking replicator]], a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of [[labour (economics)|labor]], [[Capital (economics)|capital]] and [[distribution (business)|distribution]] in conventional [[factory|manufactured goods]].<br />
<br />
<br />
一些工程科学的长期目标是实现铿锵复制机,即一种能够自我复制的物质装置。通常的目的是在保持制成品功用的同时降低每件产品的成本。许多权威人士表示,在极限情况下,自复制产品的成本应当接近木材或其他生物材料的单位重量成本,因为自我复制避免了传统工业制成品所需的劳动力、资本和分销成本。<br />
<br />
<br />
<br />
<br />
<br />
A fully novel artificial replicator is a reasonable near-term goal.<br />
<br />
<br />
建立一个全新的人工复制因子是一个合理的近期目标。<br />
<br />
A [[NASA]] study recently placed the complexity of a [[clanking replicator]] at approximately that of [[Intel]]'s [[Pentium (brand)|Pentium]] 4 CPU.<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.<br />
<br />
<br />
美国宇航局最近的一项研究表明,铿锵复制因子的复杂度大约相当于英特尔奔腾4处理器的复杂度。也就是说,这项技术在一个合理的商业时间规模内,是可以由一个相对较小的工程团队以一个合理的成本实现的。<br />
<br />
<br />
<br />
<br />
<br />
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.<br />
<br />
<br />
鉴于目前对生物技术的浓厚兴趣和该领域的高额资金投入,利用现有细胞复制能力的尝试正当其时,而且很可能带来重要的见解和进展。<br />
<br />
<br />
<br />
<br />
<br />
A variation of self replication is of practical relevance in [[compiler]] construction, where a similar [[bootstrapping]] problem occurs as in natural self replication. A compiler ([[phenotype]]) can be applied on the compiler's own [[source code]] ([[genotype]]) producing the compiler itself. During compiler development, a modified ([[Mutation|mutated]]) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.<br />
<br />
<br />
自复制的一种变体在编译器构造中具有实际意义,其中出现了与天然自复制类似的自举问题。编译器(表型)可以应用于编译器自身的源代码(基因型),从而产生编译器本身。在编译器开发过程中,使用修改(变异)过的源代码来创建下一代编译器。这个过程不同于天然自复制,因为它是由工程师指导的,而不是由主体本身指导的。<br />
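这个自举循环可以用一个极简的玩具模型示意(以下 <code>build</code>、<code>compile_fn</code> 等名称均为假设,这里把“编译器”简化为一个从源代码到源代码的 Python 函数,而非真实编译器):

```python
def build(compiler_source):
    # "Compile" the compiler's own source: execute it to obtain the
    # compile function it defines (phenotype produced from genotype).
    namespace = {}
    exec(compiler_source, namespace)
    return namespace["compile_fn"]

# Generation 0: a trivial compiler that passes source through unchanged.
gen0_source = "def compile_fn(src):\n    return src\n"
gen0 = build(gen0_source)

# A "mutated" source for generation 1 makes the output carry a banner;
# the engineer directs the mutation, then gen0 processes the new source.
gen1_source = gen0(
    "def compile_fn(src):\n    return '# built by gen1\\n' + src\n"
)
gen1 = build(gen1_source)
assert gen1("x = 1").startswith("# built by gen1")
```

每一代编译器都由上一代处理(可能经过修改的)自身源代码而来,这正是正文所述的自举结构。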
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Mechanical self-replication 机械自复制==<br />
<br />
<br />
<br />
{{Main|Self-replicating machine}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
An activity in the field of robots is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:<br />
<br />
<br />
机器人领域的一项活动就是机器的自复制。由于所有机器人(至少在现代)都有相当数量的相同特性,一个自复制机器人(或者可能是一群机器人)需要做到以下几点:<br />
<br />
<br />
<br />
<br />
<br />
*Obtain construction materials<br />
*获得建造材料
<br />
<br />
<br />
*Manufacture new parts including its smallest parts and thinking apparatus<br />
*制造新零件,包括最小的零件和思维组件<br />
<br />
<br />
<br />
*Provide a consistent power source<br />
*提供稳定持续的动力源
<br />
<br />
<br />
*Program the new members<br />
*为新成员编程<br />
<br />
<br />
<br />
*Error-correct any mistakes in the offspring
*改正子代产物的任何错误<br />
<br />
<br />
<br />
<br />
<br />
<br />
On a [[Nanotechnology|nano]] scale, [[Assembler (nanotechnology)|assemblers]] might also be designed to self-replicate under their own power. This, in turn, has given rise to the "[[grey goo]]" version of [[Armageddon]], as featured in such science fiction novels as ''[[Bloom (novel)|Bloom]]'', ''[[Prey (novel)|Prey]]'', and ''[[Recursion (novel)|Recursion]]''.<br />
<br />
<br />
在纳米尺度上,组装器也可能被设计成依靠自身动力进行自复制。这反过来又引出了“灰色粘质”版本的世界末日,正如《花开》(Bloom)、《掠食》(Prey)和《递归》(Recursion)等科幻小说所描绘的那样。<br />
<br />
<br />
<br />
<br />
<br />
The [[Foresight Institute]] has published guidelines for researchers in mechanical self-replication.<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a [[broadcast architecture]].<br />
<br />
<br />
前瞻研究所(Foresight Institute)已经为机械自复制领域的研究人员发布了指导方针。指导方针建议研究人员使用若干特定技术来防止机械复制因子失控,例如使用广播式体系结构。<br />
<br />
<br />
<br />
<br />
<br />
For a detailed article on mechanical reproduction as it relates to the industrial age see [[mass production]].<br />
<br />
<br />
有关与工业时代有关的机械复制的详细文章,请参阅'''<font color="#ff8000">大规模生产(mass production)</font>'''。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Fields 研究领域==<br />
<br />
<br />
{{refimprove section|date=August 2017}}<br />
<br />
<br />
<br />
Research has occurred in the following areas:<br />
<br />
<br />
在以下领域进行了研究:<br />
<br />
<br />
<br />
<br />
<br />
* [[Biology]] studies natural replication and replicators, and their interaction. These can be an important guide to avoid design difficulties in self-replicating machinery.<br />
<br />
<br />
<br />
* In [[Chemistry]] self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set <ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref> (often part of [[Systems chemistry]] field).<br />
<br />
<br />
<br />
* [[Meme]]tics studies ideas and how they propagate in human culture. Memes require only small amounts of material, and therefore have theoretical similarities to [[virus]]es and are often described as [[virus|viral]].<br />
<br />
<br />
<br />
* [[Nanotechnology]] or more precisely, [[molecular nanotechnology]] is concerned with making [[Nanotechnology|nano]] scale [[assembler (nanotechnology)|assemblers]]. Without self-replication, capital and assembly costs of molecular machines become impossibly large.<br />
<br />
<br />
<br />
* Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.<br />
<br />
<br />
<br />
* [[Computer security]]: Many computer security problems are caused by self-reproducing computer programs that infect computers — [[computer worm]]s and [[computer virus]]es.<br />
<br />
<br />
<br />
* In [[parallel computing]], it takes a long time to manually load a new program on every node of a large [[computer cluster]] or [[distributed computing]] system. Automatically loading new programs using [[mobile agent]]s can save the system administrator a lot of time and give users their results much quicker, as long as they don't get out of control.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==In industry 在工业界==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Space exploration and manufacturing 太空探索和制造业===<br />
<br />
<br />
<br />
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an [[autotroph]]ic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. [[Von Neumann Probe|Another model]] of self-replicating machine would copy itself through the galaxy and universe, sending information back.<br />
<br />
<br />
太空系统中自复制的目标是以较低的发射质量开发利用大量物质。例如,一台自养的自复制机器可以用太阳能电池覆盖一颗卫星或行星,并通过微波把电力传送回地球。一旦就位,建造出它自身的同一套机器也可以生产原材料或制成品,包括用于运送产品的运输系统。另一种自复制机器的模型则会在银河系和宇宙中复制自己,并把信息传回来。<br />
<br />
<br />
<br />
<br />
<br />
In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.<br />
<br />
<br />
一般来说,由于这些系统是自养的,它们是已知最困难和最复杂的复制因子。它们也被认为是最危险的,因为它们的繁殖不需要人类的任何输入。<br />
<br />
<br />
<br />
<br />
<br />
A classic theoretical study of replicators in space is the 1980 [[NASA]] study of autotrophic clanking replicators, edited by [[Robert Freitas]].<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
<br />
关于太空中复制因子的一项经典理论研究是1980年 NASA 关于自养铿锵复制因子的研究,由罗伯特·弗雷塔斯(Robert Freitas)主编。<br />
<br />
<br />
<br />
<br />
<br />
Much of the design study was concerned with a simple, flexible chemical system for processing lunar [[regolith]], and the differences between the ratio of elements needed by the replicator, and the ratios available in regolith. The limiting element was [[Chlorine]], an essential element to process regolith for [[Aluminium]]. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts.<br />
<br />
<br />
大部分设计研究关注的是一个用于处理月球风化层的简单而灵活的化学系统,以及复制因子所需元素比率与风化层中可得比率之间的差异。限制性元素是氯,它是从风化层中提炼铝所必需的元素。氯在月球风化层中非常稀少,只需输入少量的氯,即可确保复制速度大幅加快。<br />
<br />
<br />
<br />
<br />
<br />
The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic [[robot]].<br />
<br />
<br />
参考设计规定了由小型计算机控制、在轨道上运行的电动小车。每辆小车可以装一只简单的机械手或一个小型推土铲,构成一个基本的机器人。<br />
<br />
<br />
<br />
<br />
<br />
Power would be provided by a "canopy" of [[solar cell]]s supported on pillars. The other machinery could run under the canopy.<br />
<br />
<br />
电力将由支撑在支柱上的“天篷”状的太阳能电池提供。其他的机器可以在天篷下面运转。<br />
<br />
<br />
<br />
<br />
<br />
A "[[casting]] [[robot]]" would use a robotic arm with a few sculpting tools to make [[plaster]] [[molding (process)|mold]]s. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock ([[basalt]]) or purified metals. An [[electricity|electric]] [[oven]] melted the materials.<br />
<br />
<br />
一个“铸造机器人”将使用带有若干雕刻工具的机械臂来制作石膏模具。石膏模具易于制作,并且能制作出表面光洁度良好的精确零件。然后机器人将用不导电的熔融岩石(玄武岩)或提纯的金属铸造出大部分零件。一台电炉负责熔化这些材料。<br />
<br />
<br />
<br />
<br />
<br />
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".<br />
<br />
<br />
他们还提出了一个更具推测性、更为复杂的“芯片工厂”来生产计算机和电子系统,但设计者也表示,把芯片当作“维生素”一样从地球运送过去也许会被证明是可行的。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Molecular manufacturing 分子制造===<br />
<br />
<br />
<br />
{{Main|Molecular nanotechnology#Replicating nanorobots}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Nanotechnology|Nanotechnologists]] in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating [[assembler (nanotechnology)|assembler]] of [[nanometer]] dimensions [http://www.MolecularAssembler.com/KSRM/4.11.3.htm].<br />
<br />
<br />
纳米技术学家尤其相信,在人类设计出一种纳米尺度的自复制组装器 [http://www.MolecularAssembler.com/KSRM/4.11.3.htm] 之前,他们的工作很可能无法达到成熟的状态。<br />
<br />
<br />
<br />
<br />
<br />
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether [[molecular manufacturing]] is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a [[Lego]]-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
<br />
这些系统比自养系统简单得多,因为它们获得的是提纯的原料和能源,不必自行再生产这些东西。这种区别正是关于分子制造是否可行的一些争论的根源。许多认为不可能的权威人士显然是在引用关于复杂的自养自复制系统的资料,而许多认为可能的权威人士显然是在引用关于简单得多的、已经得到演示的自组装系统的资料。与此同时,2003年的一项实验演示了一个用乐高积木搭建的自主机器人,它能够沿着预先设定的轨道,从外部提供的4个组件开始,组装出自身的精确复制品 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm]。<br />
<br />
<br />
<br />
<br />
<br />
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of [[protein biosynthesis]] (also see the listing for [[RNA]]).<br />
<br />
<br />
仅仅利用现有细胞的复制能力是不够的,因为蛋白质生物合成过程存在局限性(另见 RNA 词条)。<br />
<br />
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.<br />
<br />
<br />
我们需要的是合理设计一种具有更广泛合成能力的全新复制因子。<br />
<br />
<br />
<br />
<br />
<br />
In 2011, New York University scientists have developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They have demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
<br />
<br />
2011年,纽约大学的科学家们开发出了能够自我复制的人工结构,这一过程有可能产生新型材料。他们已经证明,不仅可以复制像细胞 DNA 或 RNA 这样的分子,还可以复制原则上能够呈现许多不同形状、具有许多不同功能特征、并能与许多不同类型化学物种相关联的离散结构。<br />
<br />
<br />
<br />
<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see [[alternative biochemistry]].<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see alternative biochemistry.<br />
<br />
有关假设的自我复制系统的其他化学基础的讨论,请参阅替代生物化学。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==See also 请参阅==<br />
<br />
<br />
*[https://zhuanlan.zhihu.com/p/135833919 从自我复制到自我意识]<br />
<br />
<br />
<br />
* [[Artificial life]]<br />
* 人造生命<br />
<br />
<br />
* [[Astrochicken]]<br />
* 太空鸡实验<br />
<br />
<br />
* [[Autopoiesis]]<br />
* 自创生<br />
<br />
<br />
* [[Complex system]]<br />
* 复杂系统<br />
<br />
<br />
* [[DNA replication]]<br />
* DNA复制<br />
<br />
<br />
* [[Life]]<br />
* 生命<br />
<br />
<br />
* [[Robot]]<br />
* 机器人<br />
<br />
<br />
* [[RepRap]] (self-replicated 3D printer)<br />
* RepRap(自复制3D打印机)<br />
<br />
<br />
* [[Self-replicating machine]]<br />
* 自复制机器<br />
<br />
<br />
** [[Self-replicating spacecraft]]<br />
** 自复制空间飞行器<br />
<br />
<br />
* [[Space manufacturing]]<br />
* 空间制造<br />
<br />
<br />
* [[Von Neumann universal constructor]]<br />
* 冯·诺依曼通用构造器<br />
<br />
<br />
* [[Virus]]<br />
* 病毒<br />
<br />
<br />
* [[Von Neumann machine (disambiguation)]]<br />
* 冯·诺依曼机<br />
<br />
<br />
* [[Self reconfigurable]]<br />
* 自重构<br />
<br />
<br />
* [[Final Anthropic Principle]]<br />
* 最终人存原理<br />
<br />
<br />
* [[Positive feedback]]<br />
* 正反馈<br />
<br />
<br />
* [[Harmonic]]<br />
* 谐波<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==References 参考文献==<br />
<br />
<br />
<br />
{{reflist}}<br />
<br />
<br />
<br />
;Notes 注释<br />
<br />
{{refbegin}}<br />
<br />
<br />
<br />
<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
<br />
<br />
<br />
* [[s:Advanced Automation for Space Missions|Advanced Automation for Space Missions]], a 1980 NASA study edited by [[Robert Freitas]]<br />
<br />
<br />
<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines] first comprehensive survey of entire field in 2004 by [[Robert Freitas]] and [[Ralph Merkle]]<br />
<br />
<br />
<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advanced Concepts study by General Dynamics] - concluded that the complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
<br />
<br />
<br />
* ''[[Gödel, Escher, Bach]]'' by [[Douglas Hofstadter]] (detailed discussion and many examples)<br />
<br />
<br />
<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.) Contemporary Math. vol. 135 (1992), 239-264.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:Self-replication| ]]<br />
<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Self-replication]]. Its edit history can be viewed at [[自复制/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
<hr />
<div>此词条暂由袁一博翻译,未经人工整理和审校,带来阅读不便,请见谅。{{see also|Biological reproduction}}<br />
<br />
<br />
<br />
{{Use dmy dates|date=April 2019|cs1-dates=y}}<br />
<br />
<br />
<br />
[[Image:DNA chemical structure.svg|thumb|right|200px|[[Molecular structure]] of [[DNA]] ]]<br />
<br />
DNA 的分子结构<br />
<br />
'''Self-replication''' is any behavior of a [[dynamical system]] that yields construction of an identical or similar copy of itself. [[Cell (biology)|Biological cell]]s, given suitable environments, reproduce by [[cell division]]. During cell division, [[DNA]] is replicated and can be transmitted to offspring during [[reproduction]]. [[virus (biology)|Biological viruses]] can [[Viral replication|replicate]], but only by commandeering the reproductive machinery of cells through a process of infection. Harmful [[prion]] proteins can replicate by converting normal proteins into rogue forms.<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref> [[Computer virus]]es reproduce using the hardware and software already present on computers. Self-replication in [[robotics]] has been an area of research and a subject of interest in [[science fiction]]. Any self-replicating mechanism which does not make a perfect copy ([[mutation]]) will experience [[genetic variation]] and will create variants of itself. These variants will be subject to [[natural selection]], since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
Self-replication is any behavior of a dynamical system that yields construction of an identical or similar copy of itself. Biological cells, given suitable environments, reproduce by cell division. During cell division, DNA is replicated and can be transmitted to offspring during reproduction. Biological viruses can replicate, but only by commandeering the reproductive machinery of cells through a process of infection. Harmful prion proteins can replicate by converting normal proteins into rogue forms. Computer viruses reproduce using the hardware and software already present on computers. Self-replication in robotics has been an area of research and a subject of interest in science fiction. Any self-replicating mechanism which does not make a perfect copy (mutation) will experience genetic variation and will create variants of itself. These variants will be subject to natural selection, since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
自我复制是指动力系统构建出与自身相同或相似副本的任何行为。生物细胞在适当的环境下通过细胞分裂进行繁殖。在细胞分裂过程中,DNA 被复制,并可在生殖过程中传递给后代。生物病毒可以复制,但只能通过感染过程征用细胞的繁殖机器。有害的朊病毒蛋白可以通过把正常蛋白质转化为异常形式来复制。计算机病毒利用计算机上已有的硬件和软件进行复制。机器人学中的自我复制一直是一个研究领域,也是科幻小说中令人感兴趣的主题。任何不能产生完美副本的自我复制机制都会发生突变,从而经历遗传变异并产生自身的变体。这些变体将受到自然选择的作用,因为其中一些会比其他变体更善于在当前环境中生存,并在繁殖上胜过它们。<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Overview 综述==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Theory 理论===<br />
<br />
<br />
<br />
{{See also|Von Neumann universal constructor}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Early research by [[John von Neumann]]<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref> established that replicators have several parts:<br />
<br />
Early research by John von Neumann established that replicators have several parts:<br />
<br />
约翰·冯·诺伊曼的早期研究表明复制因子有几个部分:<br />
<br />
<br />
<br />
<br />
<br />
*A coded representation of the replicator<br />
<br />
<br />
<br />
*A mechanism to copy the coded representation<br />
<br />
<br />
<br />
*A mechanism for effecting construction within the host environment of the replicator<br />
<br />
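These three parts can be made concrete with a toy sketch (a conceptual illustration only; all names are hypothetical, and the "environment" is just a Python list rather than any real construction machinery):<br />

```python
# Toy model of von Neumann's replicator parts: a coded description,
# a copy mechanism, and a construction mechanism acting in a host
# environment (here, a plain Python list). Hypothetical names throughout.

description = ("copier", "constructor")  # coded representation of the replicator

def copy_description(desc):
    # The mechanism that copies the coded representation.
    return tuple(desc)

def construct(desc, environment):
    # The mechanism that effects construction within the host environment.
    machine = {"parts": desc, "description": copy_description(desc)}
    environment.append(machine)
    return machine

environment = []
parent = construct(description, environment)
child = construct(parent["description"], environment)
assert child["parts"] == parent["parts"]  # the offspring matches its parent
```

Because the description is itself copied into each offspring, every generation can in turn construct the next.<br />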
<br />
<br />
<br />
<br />
<br />
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
这种模式可能存在例外,尽管迄今尚未实现任何例外。例如,科学家们已经接近于在由 RNA 单体和转录酶溶液构成的“环境”中构建[https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ 可复制的 RNA]。在这种情况下,“身体”就是基因组,而专门的复制机制是外部的。对外部复制机制的依赖尚未被克服,这类系统更准确的描述是“辅助复制”而非“自我复制”。<br />
<br />
<br />
<br />
<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a [[crystal]].<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a crystal.<br />
<br />
然而,最简单的可能情况是只存在一个基因组。如果没有对自我复制步骤的某种说明,一个只有基因组的系统或许更适合被描述为类似晶体的东西。<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Classes of self-replication 自复制的类别===<br />
<br />
<br />
<br />
Recent research<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref> has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
Recent research has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
最近的研究已经开始对复制因子进行分类,通常基于它们所需要的支持程度。<br />
<br />
<br />
<br />
<br />
<br />
*Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms.<br />
<br />
*自然复制因子的设计全部或绝大部分来自非人类来源。这样的系统包含自然生命形式。<br />
<br />
<br />
*[[Autotroph]]ic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products.<br />
<br />
*自养复制因子可以在自然环境中自我复制,它们自行开采所需的原材料。据推测,人类可以设计出非生物的自养复制因子,而且这类复制因子可以很容易地接受人类产品的规格说明。<br />
<br />
<br />
*Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire.<br />
<br />
*自我再生产系统是一类假想的系统,它们能利用金属棒和金属丝等工业原料生产出自身的副本。<br />
<br />
<br />
*Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale.<br />
<br />
*自组装系统利用已制成并运送到位的零件组装出自身的副本。这类系统的简单实例已在宏观尺度上得到演示。<br />
<br />
<br />
<br />
<br />
<br />
The design space for machine replicators is very broad. A comprehensive study<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> to date by [[Robert Freitas]] and [[Ralph Merkle]] has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.<br />
<br />
The design space for machine replicators is very broad. A comprehensive study to date by Robert Freitas and Ralph Merkle has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.<br />
<br />
机器复制因子的设计空间非常广阔。迄今为止,罗伯特·弗雷塔斯(Robert Freitas)和拉尔夫·默克尔(Ralph Merkle)的综合研究已经确定了137个设计维度,并将其分为十几个独立的类别,包括:(1)复制控制,(2)复制信息,(3)复制基质,(4)复制因子结构,(5)被动部件,(6)主动子单元,(7)复制因子能量学,(8)复制因子运动学,(9)复制过程,(10)复制因子性能,(11)产品结构,以及(12)可进化性。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===A self-replicating computer program 一种自复制的电脑程序===<br />
<br />
<br />
<br />
{{Main|Quine (computing)}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[computer science]] a [[Quine (computing)|quine]] is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the [[Python (programming language)|Python programming language]] is:<br />
<br />
In computer science a quine is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the Python programming language is:<br />
<br />
在计算机科学中,quine 是一种自我复制的计算机程序,当执行时,输出自己的代码。例如,利用Python语言编写的一个 quine 是:<br />
<br />
<br />
<br />
<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
<br />
<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
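The quine above can be checked mechanically: running it must print exactly its own source. A minimal sketch, assuming a standard Python 3 interpreter (the `source` string is the quine copied verbatim):<br />

```python
import io
import contextlib

# The quine from the text, stored as data so we can compare against it.
source = "a='a=%r;print(a%%a)';print(a%a)"

# Execute the quine and capture what it prints.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(source)

output = buf.getvalue().rstrip("\n")
assert output == source  # the program's output is its own source code
```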
<br />
<br />
<br />
<br />
<br />
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.<br />
<br />
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.<br />
<br />
一种更简单的方法是编写一个程序,这个程序将复制它所指向的任何数据流,然后指向它自己。在这种情况下,程序既被当作可执行代码,也被当作要操作的数据。这种方法在包括生物生命在内的大多数自复制系统中都很常见,而且更简单,因为它不需要程序包含对自身的完整描述。<br />
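This "program as both code and data" idea can be sketched as follows (a hypothetical toy: a one-line copier definition stands in for a real program file):<br />

```python
# The copier's own code, held as a plain string (data).
copier_code = "def copy(data): return bytes(data)"

# Treat the code as executable: build the copier by executing its source.
namespace = {}
exec(copier_code, namespace)
copy = namespace["copy"]

# Treat the same code as data: direct the copier at itself.
offspring = copy(copier_code.encode())

assert offspring.decode() == copier_code  # a byte-for-byte copy of the program
```

Note that, unlike the quine, the copier never needs to contain a description of itself; it only needs to be pointed at its own source.<br />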
<br />
<br />
<br />
<br />
<br />
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.<br />
<br />
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.<br />
<br />
在许多编程语言中,空程序是合法的,执行时不会产生错误或其他输出。这时输出与源代码相同,因此该程序平凡地实现了自我复制。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating tiling 自复制式平铺===<br />
<br />
<br />
<br />
{{See also|Self-similarity}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[geometry]] a self-replicating tiling is a tiling pattern in which several [[congruence (geometry)|congruent]] tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as [[tessellation]]. The "sphinx" [[hexiamond]] is the only known self-replicating [[pentagon]].<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref> For example, four such [[concave polygon|concave]] pentagons can be joined together to make one with twice the dimensions.<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref> [[Solomon W. Golomb]] coined the term [[rep-tiles]] for self-replicating tilings.<br />
<br />
In geometry a self-replicating tiling is a tiling pattern in which several congruent tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as tessellation. The "sphinx" hexiamond is the only known self-replicating pentagon. For example, four such concave pentagons can be joined together to make one with twice the dimensions. Solomon W. Golomb coined the term rep-tiles for self-replicating tilings.<br />
<br />
在几何学中,自复制式平铺是一种平铺图案,其中若干全等的瓷砖可以拼合成一个与原瓷砖相似的更大瓷砖。这是被称为密铺(tessellation)的研究领域的一个方面。“狮身人面像”六联三角形(hexiamond)是已知唯一能自我复制的五边形。例如,四个这样的凹五边形可以拼合成一个各边尺寸为原来两倍的五边形。所罗门·W·格伦布(Solomon W. Golomb)创造了术语 rep-tiles 来指代自复制式平铺。<br />
<br />
<br />
<br />
<br />
<br />
In 2012, [[Lee Sallows]] identified rep-tiles as a special instance of a [[self-tiling tile set]] or setiset. A setiset of order ''n'' is a set of ''n'' shapes that can be assembled in ''n'' different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-''n'' rep-tile is just a setiset composed of ''n'' identical pieces.<br />
<br />
In 2012, Lee Sallows identified rep-tiles as a special instance of a self-tiling tile set or '''<font color="#32CD32">setiset</font>'''. A setiset of order n is a set of n shapes that can be assembled in n different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-n rep-tile is just a setiset composed of n identical pieces.<br />
<br />
2012年,李·萨洛斯(Lee Sallows)将 rep-tiles 界定为自平铺瓷砖组(setiset)的一个特例。一个 ''n'' 阶 setiset 是一组 ''n'' 个形状,它们可以按 ''n'' 种不同的方式组装成自身的更大副本。其中每个形状都互不相同的 setiset 被称为“完美的”。rep-''n'' 的 rep-tile 正是由 ''n'' 个相同部件组成的 setiset。<br />
--[[用户:粲兰|袁一博]]([[用户讨论:粲兰|讨论]])“setiset”找不到合适的翻译。<br />
<br />
{|<br />
|- style="vertical-align:bottom;"<br />
[[File:Self-replication of sphynx hexidiamonds.svg|thumb|left|text-bottom|260px|Four '[[Sphinx tiling|sphinx]]' hexiamonds can be put together to form another sphinx. 四个“狮身人面像”六联三角形可以拼合成另一个狮身人面像]]<br />
[[File:A rep-tile-based setiset of order 4.png|thumb|right|text-bottom|290px|A perfect [[Self-tiling tile set|setiset]] of order 4 一个完美的四阶 setiset]]<br />
|}<br />
<br />
{{clear}}<br />
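The rep-''n'' idea can be sketched with the simplest rep-tile, a square, which like the sphinx is rep-4 (an illustrative toy; the sphinx's own subdivision geometry is more involved):<br />

```python
# Subdivide an axis-aligned square into four half-scale copies of itself,
# the defining property of a rep-4 rep-tile. Tiles are (x, y, size) triples
# giving the lower-left corner and side length.

def subdivide(x, y, size):
    h = size / 2.0
    return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]

tiles = subdivide(0.0, 0.0, 1.0)
assert len(tiles) == 4
# The four half-scale copies cover the same total area as the original.
assert abs(sum(s * s for (_, _, s) in tiles) - 1.0) < 1e-12
```

Applying `subdivide` recursively to each sub-tile yields arbitrarily fine self-similar tilings, which is why rep-tiles connect to the self-similarity of fractals.<br />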
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self replicating clay crystals 自复制的粘土晶体===<br />
<br />
<br />
<br />
One form of natural self-replication that isn't based on DNA or RNA occurs in clay crystals.<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref> Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if e.g. placed in a water solution containing the crystal components; automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.<br />
<br />
One form of natural self-replication that isn't based on DNA or RNA occurs in clay crystals. Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if e.g. placed in a water solution containing the crystal components; automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.<br />
<br />
粘土晶体中存在一种不基于 DNA 或 RNA 的天然自我复制。粘土由大量小晶体组成,而粘土环境能促进晶体生长。晶体由规则的原子晶格构成,例如置于含有晶体成分的水溶液中便能生长,晶体边界处的原子会自动排列成晶体形式。晶体中可能存在规则原子结构被打破的不规则缺陷;当晶体生长时,这些缺陷可能随之传播,形成一种晶体缺陷的自我复制。由于这些缺陷可能影响晶体断裂形成新晶体的概率,具有此类缺陷的晶体甚至可以被认为经历着演化发展。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Applications 应用===<br />
<br />
<br />
<br />
It is a long-term goal of some engineering sciences to achieve a [[clanking replicator]], a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of [[labour (economics)|labor]], [[Capital (economics)|capital]] and [[distribution (business)|distribution]] in conventional [[factory|manufactured goods]].<br />
<br />
It is a long-term goal of some engineering sciences to achieve a clanking replicator, a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of labor, capital and distribution in conventional manufactured goods.<br />
<br />
一些工程科学的长期目标是造出铿锵复制机,即一种能够自我复制的实体装置。通常的理由是在保留制成品效用的同时降低每件产品的成本。许多权威人士表示,在极限情况下,自复制产品的成本应当接近木材或其他生物材料的单位重量成本,因为自我复制省去了传统工业制成品所需的劳动力、资本和分销成本。<br />
<br />
<br />
<br />
<br />
<br />
A fully novel artificial replicator is a reasonable near-term goal.<br />
<br />
A fully novel artificial replicator is a reasonable near-term goal.<br />
<br />
建立一个全新的人工复制因子是一个合理的近期目标。<br />
<br />
A [[NASA]] study recently placed the complexity of a [[clanking replicator]] at approximately that of [[Intel]]'s [[Pentium (brand)|Pentium]] 4 CPU.<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.<br />
<br />
A NASA study recently placed the complexity of a clanking replicator at approximately that of Intel's Pentium 4 CPU. That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.<br />
<br />
美国宇航局最近的一项研究表明,铿锵复制因子的复杂度大约相当于英特尔奔腾4处理器的复杂度。也就是说,这项技术在一个合理的商业时间规模内,是可以由一个相对较小的工程团队以一个合理的成本实现的。<br />
<br />
<br />
<br />
<br />
<br />
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.<br />
<br />
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.<br />
<br />
鉴于目前各界对生物技术的浓厚兴趣以及该领域的高额资金投入,利用现有细胞复制能力的尝试正当其时,而且很容易带来重要的见解和进展。<br />
<br />
<br />
<br />
<br />
<br />
A variation of self replication is of practical relevance in [[compiler]] construction, where a similar [[bootstrapping]] problem occurs as in natural self replication. A compiler ([[phenotype]]) can be applied on the compiler's own [[source code]] ([[genotype]]) producing the compiler itself. During compiler development, a modified ([[Mutation|mutated]]) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.<br />
<br />
A variation of self replication is of practical relevance in compiler construction, where a similar bootstrapping problem occurs as in natural self replication. A compiler (phenotype) can be applied on the compiler's own source code (genotype) producing the compiler itself. During compiler development, a modified (mutated) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.<br />
<br />
自复制的一种变体在编译器构造中具有实际意义,在天然自复制中也会出现类似的自举问题。编译器(表型)可以应用于编译器自身的源代码(基因型) ,从而产生编译器本身。在编译器开发过程中,一般使用修改(变异)的源代码来创建下一代编译器。这个过程不同于自然自我复制,因为这个过程是由工程师指导的,而不是主体本身。<br />
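A conceptual sketch of this bootstrapping loop, with a toy source-to-source "compiler" standing in for a real one (all names are hypothetical):<br />

```python
# The compiler's source (genotype): a toy transformer that expands tabs.
compiler_source = "def compile_(src): return src.replace('\\t', '    ')"

def build(source):
    # Produce a runnable compiler (phenotype) from its source text.
    namespace = {}
    exec(source, namespace)
    return namespace["compile_"]

gen1 = build(compiler_source)

# An engineer mutates the source, then uses the existing compiler to
# process it into the next generation: the loop is directed by the
# engineer, not by the subject itself.
mutated_source = compiler_source + "  # v2"
gen2 = build(gen1(mutated_source))

assert gen2("a\tb") == "a    b"  # the new generation still compiles
```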
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Mechanical self-replication 机械自复制==<br />
<br />
<br />
<br />
{{Main|Self-replicating machine}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
An activity in the field of robots is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:<br />
<br />
An activity in the field of robots is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:<br />
<br />
机器人领域的一项活动就是机器的自复制。由于所有机器人(至少在现代)都有相当数量的相同特性,一个自复制机器人(或者可能是一群机器人)需要做到以下几点:<br />
<br />
<br />
<br />
<br />
<br />
*Obtain construction materials<br />
*获取建造材料<br />
<br />
<br />
<br />
*Manufacture new parts including its smallest parts and thinking apparatus<br />
*制造新零件,包括最小的零件和思维组件<br />
<br />
<br />
<br />
*Provide a consistent power source<br />
*提供持续稳定的动力源<br />
<br />
<br />
<br />
*Program the new members<br />
*为新成员编程<br />
<br />
<br />
<br />
*Error-correct any mistakes in the offspring<br />
*改正子代产物的任何错误<br />
<br />
<br />
<br />
<br />
<br />
<br />
On a [[Nanotechnology|nano]] scale, [[Assembler (nanotechnology)|assemblers]] might also be designed to self-replicate under their own power. This, in turn, has given rise to the "[[grey goo]]" version of [[Armageddon]], as featured in such science fiction novels as ''[[Bloom (novel)|Bloom]]'', ''[[Prey (novel)|Prey]]'', and ''[[Recursion (novel)|Recursion]]''.<br />
<br />
On a nano scale, assemblers might also be designed to self-replicate under their own power. This, in turn, has given rise to the "grey goo" version of Armageddon, as featured in such science fiction novels as Bloom, Prey, and Recursion.<br />
<br />
在纳米尺度上,组装器也可能被设计成依靠自身动力进行自我复制。这进而催生了“灰色粘质”版本的世界末日,正如《花开》、《掠食》和《递归》这样的科幻小说中所描绘的那样。<br />
<br />
<br />
<br />
<br />
<br />
The [[Foresight Institute]] has published guidelines for researchers in mechanical self-replication.<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a [[broadcast architecture]].<br />
<br />
The Foresight Institute has published guidelines for researchers in mechanical self-replication. The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a broadcast architecture.<br />
<br />
前瞻研究所(Foresight Institute)已经为从事机械自复制研究的人员发布了指导方针。指导方针建议研究人员使用若干特定技术来防止机械复制因子失控,例如使用广播式架构。<br />
<br />
<br />
<br />
<br />
<br />
For a detailed article on mechanical reproduction as it relates to the industrial age see [[mass production]].<br />
<br />
For a detailed article on mechanical reproduction as it relates to the industrial age see mass production.<br />
<br />
有关与工业时代有关的机械复制的详细文章,请参阅'''<font color="#ff8000">大规模生产(mass production)</font>'''。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Fields 研究领域==<br />
<br />
<br />
{{refimprove section|date=August 2017}}<br />
<br />
<br />
<br />
Research has occurred in the following areas:<br />
<br />
Research has occurred in the following areas:<br />
<br />
在以下领域进行了研究:<br />
<br />
<br />
<br />
<br />
<br />
* [[Biology]] studies natural replication and replicators, and their interaction. These can be an important guide to avoid design difficulties in self-replicating machinery.<br />
<br />
<br />
<br />
* In [[Chemistry]] self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set <ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref> (often part of [[Systems chemistry]] field).<br />
<br />
<br />
<br />
* [[Meme]]tics studies ideas and how they propagate in human culture. Memes require only small amounts of material, and therefore have theoretical similarities to [[virus]]es and are often described as [[virus|viral]].<br />
<br />
<br />
<br />
* [[Nanotechnology]] or more precisely, [[molecular nanotechnology]] is concerned with making [[Nanotechnology|nano]] scale [[assembler (nanotechnology)|assemblers]]. Without self-replication, capital and assembly costs of molecular machines become impossibly large.<br />
<br />
<br />
<br />
* Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.<br />
<br />
<br />
<br />
* [[Computer security]]: Many computer security problems are caused by self-reproducing computer programs that infect computers — [[computer worm]]s and [[computer virus]]es.<br />
<br />
<br />
<br />
* In [[parallel computing]], it takes a long time to manually load a new program on every node of a large [[computer cluster]] or [[distributed computing]] system. Automatically loading new programs using [[mobile agent]]s can save the system administrator a lot of time and give users their results much quicker, as long as they don't get out of control.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==In industry 在工业界==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Space exploration and manufacturing 太空探索和制造业===<br />
<br />
<br />
<br />
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an [[autotroph]]ic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. [[Von Neumann Probe|Another model]] of self-replicating machine would copy itself through the galaxy and universe, sending information back.<br />
<br />
<br />
<br />
<br />
<br />
<br />
In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A classic theoretical study of replicators in space is the 1980 [[NASA]] study of autotrophic clanking replicators, edited by [[Robert Freitas]].<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
<br />
<br />
<br />
<br />
<br />
Much of the design study was concerned with a simple, flexible chemical system for processing lunar [[regolith]], and with the differences between the ratio of elements needed by the replicator and the ratios available in regolith. The limiting element was [[chlorine]], which is essential for processing regolith into [[aluminium]]. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts.<br />
<br />
<br />
<br />
<br />
<br />
<br />
The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic [[robot]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
Power would be provided by a "canopy" of [[solar cell]]s supported on pillars. The other machinery could run under the canopy.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A "[[casting]] [[robot]]" would use a robotic arm with a few sculpting tools to make [[plaster]] [[molding (process)|mold]]s. Plaster molds are easy to make, and produce precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock ([[basalt]]) or purified metals. An [[electricity|electric]] [[oven]] would melt the materials.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Molecular manufacturing===<br />
<br />
<br />
{{Main|Molecular nanotechnology#Replicating nanorobots}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Nanotechnology|Nanotechnologists]] in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating [[assembler (nanotechnology)|assembler]] of [[nanometer]] dimensions [http://www.MolecularAssembler.com/KSRM/4.11.3.htm].<br />
<br />
<br />
<br />
<br />
<br />
<br />
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether [[molecular manufacturing]] is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a [[Lego]]-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
<br />
<br />
<br />
<br />
<br />
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of [[protein biosynthesis]] (also see the listing for [[RNA]]).<br />
<br />
<br />
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.<br />
<br />
<br />
<br />
<br />
<br />
<br />
In 2011, New York University scientists have developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They have demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
<br />
<br />
<br />
<br />
<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see [[alternative biochemistry]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
<br />
<br />
* [[Artificial life]]<br />
<br />
<br />
<br />
* [[Astrochicken]]<br />
<br />
<br />
<br />
* [[Autopoiesis]]<br />
<br />
<br />
<br />
* [[Complex system]]<br />
<br />
<br />
<br />
* [[DNA replication]]<br />
<br />
<br />
<br />
* [[Life]]<br />
<br />
<br />
<br />
* [[Robot]]<br />
<br />
<br />
<br />
* [[RepRap]] (self-replicated 3D printer)<br />
<br />
<br />
<br />
* [[Self-replicating machine]]<br />
<br />
<br />
<br />
** [[Self-replicating spacecraft]]<br />
<br />
<br />
<br />
* [[Space manufacturing]]<br />
<br />
<br />
<br />
* [[Von Neumann universal constructor]]<br />
<br />
<br />
<br />
* [[Virus]]<br />
<br />
<br />
<br />
* [[Von Neumann machine (disambiguation)]]<br />
<br />
<br />
<br />
* [[Self reconfigurable]]<br />
<br />
<br />
<br />
* [[Final Anthropic Principle]]<br />
<br />
<br />
<br />
* [[Positive feedback]]<br />
<br />
<br />
<br />
* [[Harmonic]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==References==<br />
<br />
<br />
<br />
{{reflist}}<br />
<br />
<br />
<br />
;Notes<br />
<br />
<br />
{{refbegin}}<br />
<br />
<br />
<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
<br />
<br />
<br />
* [[s:Advanced Automation for Space Missions|Advanced Automation for Space Missions]], a 1980 NASA study edited by [[Robert Freitas]]<br />
<br />
<br />
<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines] first comprehensive survey of entire field in 2004 by [[Robert Freitas]] and [[Ralph Merkle]]<br />
<br />
<br />
<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advanced Concepts study by General Dynamics] - concluded that the complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
<br />
<br />
<br />
* ''[[Gödel, Escher, Bach]]'' by [[Douglas Hofstadter]] (detailed discussion and many examples)<br />
<br />
<br />
<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.) Contemporary Math. vol. 135 (1992), 239-264.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:Self-replication| ]]<br />
<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Self-replication]]. Its edit history can be viewed at [[自复制/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div><br />
自复制 Self-replication (revision of 2020-10-15 by 粲兰)<br />
<hr />
<div>This entry was provisionally translated by 袁一博 and has not yet been copyedited or reviewed; we apologise for any inconvenience.{{see also|Biological reproduction}}<br />
<br />
<br />
<br />
{{Use dmy dates|date=April 2019|cs1-dates=y}}<br />
<br />
<br />
<br />
[[Image:DNA chemical structure.svg|thumb|right|200px|[[Molecular structure]] of [[DNA]] ]]<br />
<br />
<br />
'''Self-replication''' is any behavior of a [[dynamical system]] that yields construction of an identical or similar copy of itself. [[Cell (biology)|Biological cell]]s, given suitable environments, reproduce by [[cell division]]. During cell division, [[DNA]] is replicated and can be transmitted to offspring during [[reproduction]]. [[virus (biology)|Biological viruses]] can [[Viral replication|replicate]], but only by commandeering the reproductive machinery of cells through a process of infection. Harmful [[prion]] proteins can replicate by converting normal proteins into rogue forms.<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref> [[Computer virus]]es reproduce using the hardware and software already present on computers. Self-replication in [[robotics]] has been an area of research and a subject of interest in [[science fiction]]. Any self-replicating mechanism which does not make a perfect copy ([[mutation]]) will experience [[genetic variation]] and will create variants of itself. These variants will be subject to [[natural selection]], since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Overview==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Theory===<br />
<br />
<br />
<br />
{{See also|Von Neumann universal constructor}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Early research by [[John von Neumann]]<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref> established that replicators have several parts:<br />
<br />
<br />
<br />
<br />
<br />
<br />
*A coded representation of the replicator<br />
<br />
<br />
<br />
*A mechanism to copy the coded representation<br />
<br />
<br />
<br />
*A mechanism for effecting construction within the host environment of the replicator<br />
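The three parts above can be sketched as a toy program (hypothetical names, and deliberately far simpler than von Neumann's actual cellular-automaton construction):<br />

```python
# Hypothetical toy model of von Neumann's three replicator parts:
# BLUEPRINT is the coded representation; copy_blueprint is the copying
# mechanism; construct is the mechanism effecting construction.

BLUEPRINT = {"parts": ["copier", "constructor"], "generation": 0}

def copy_blueprint(bp):
    """Mechanism to copy the coded representation."""
    return dict(bp, parts=list(bp["parts"]))

def construct(bp):
    """Mechanism effecting construction within the host environment."""
    child_bp = copy_blueprint(bp)
    child_bp["generation"] += 1          # each copy records its lineage
    return {"blueprint": child_bp, "parts": list(child_bp["parts"])}

parent = {"blueprint": BLUEPRINT, "parts": list(BLUEPRINT["parts"])}
child = construct(parent["blueprint"])
grandchild = construct(child["blueprint"])
```

Each offspring carries both a working copy of the parts and its own copy of the blueprint, which is what lets replication continue indefinitely.<br />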
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
<br />
<br />
<br />
<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a [[crystal]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Classes of self-replication===<br />
<br />
<br />
<br />
Recent research<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref> has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
<br />
<br />
<br />
<br />
<br />
*Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms.<br />
<br />
<br />
<br />
*[[Autotroph]]ic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products.<br />
<br />
<br />
<br />
*Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bars and wire.<br />
<br />
<br />
*Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale.<br />
<br />
<br />
<br />
<br />
<br />
<br />
The design space for machine replicators is very broad. A comprehensive study<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> to date by [[Robert Freitas]] and [[Ralph Merkle]] has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===A self-replicating computer program===<br />
<br />
<br />
<br />
{{Main|Quine (computing)}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[computer science]] a [[Quine (computing)|quine]] is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the [[Python (programming language)|Python programming language]] is:<br />
<br />
<br />
<br />
<br />
<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
<br />
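Assuming a standard Python 3 interpreter, the quine property can be checked by executing the source and comparing the captured output with the source itself:<br />

```python
import io
import contextlib

# The one-line Python quine from the text.
source = "a='a=%r;print(a%%a)';print(a%a)"

# Execute it and capture everything it prints.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(source)

# The program's output is exactly its own source (plus print's newline).
assert buf.getvalue() == source + "\n"
```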
<br />
<br />
<br />
<br />
<br />
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.<br />
<br />
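A minimal sketch of this data-copying approach (the function name is illustrative): a program that copies whatever file it is directed at reproduces itself as soon as it is directed at its own source file:<br />

```python
# A "trivial" self-reproducer: rather than containing a description of
# itself, the program simply copies the stream it is pointed at.

def copy_stream(path):
    """Return the contents of the file at path, unchanged."""
    with open(path) as f:
        return f.read()

# Directing the copier at this very file would reproduce the program:
#     print(copy_stream(__file__), end="")
```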
<br />
<br />
<br />
<br />
<br />
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating tiling===<br />
<br />
<br />
<br />
{{See also|Self-similarity}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[geometry]] a self-replicating tiling is a tiling pattern in which several [[congruence (geometry)|congruent]] tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as [[tessellation]]. The "sphinx" [[hexiamond]] is the only known self-replicating [[pentagon]].<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref> For example, four such [[concave polygon|concave]] pentagons can be joined together to make one with twice the dimensions.<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref> [[Solomon W. Golomb]] coined the term [[rep-tiles]] for self-replicating tilings.<br />
<br />
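The rep-tile property is easy to verify computationally for a shape simpler than the sphinx: the L-tromino is rep-4, meaning four congruent copies tile a double-scale copy. A sketch (cell coordinates chosen for illustration; pieces are compared up to rotation and translation):<br />

```python
# The L-tromino as a set of unit grid cells.
TROMINO = {(0, 0), (1, 0), (0, 1)}

# The 2x-scaled tromino: every original cell becomes a 2x2 block.
SCALED = {(2 * x + dx, 2 * y + dy)
          for (x, y) in TROMINO for dx in (0, 1) for dy in (0, 1)}

# One partition of the scaled shape into four unit trominoes.
PIECES = [
    {(0, 0), (1, 0), (0, 1)},
    {(2, 0), (3, 0), (3, 1)},
    {(0, 2), (0, 3), (1, 3)},
    {(1, 1), (2, 1), (1, 2)},
]

def normalize(cells):
    """Translate a cell set so its minimum coordinates are (0, 0)."""
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def rotations(cells):
    """All four 90-degree rotations of a cell set, normalized."""
    shapes, cur = [], cells
    for _ in range(4):
        shapes.append(normalize(cur))
        cur = {(-y, x) for x, y in cur}   # rotate 90 degrees
    return shapes

# The four pieces exactly cover the scaled tromino, without overlap...
assert set().union(*PIECES) == SCALED
assert sum(len(p) for p in PIECES) == len(SCALED)
# ...and each piece is congruent to the original tromino.
assert all(normalize(p) in rotations(TROMINO) for p in PIECES)
```

The same kind of check applies to the sphinx, with more cells and more orientations.<br />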
<br />
<br />
<br />
<br />
<br />
In 2012, [[Lee Sallows]] identified rep-tiles as a special instance of a [[self-tiling tile set]] or setiset. A setiset of order ''n'' is a set of ''n'' shapes that can be assembled in ''n'' different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-''n'' rep-tile is just a setiset composed of ''n'' identical pieces.<br />
<br />
<br />
{|<br />
|- style="vertical-align:bottom;"<br />
| [[File:Self-replication of sphynx hexidiamonds.svg|thumb|left|260px|Four '[[Sphinx tiling|sphinx]]' hexiamonds can be put together to form another sphinx.]]<br />
| [[File:A rep-tile-based setiset of order 4.png|thumb|right|290px|A perfect [[Self-tiling tile set|setiset]] of order 4.]]<br />
|}<br />
<br />
{{clear}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating clay crystals===<br />
<br />
<br />
<br />
One form of natural self-replication that is not based on DNA or RNA occurs in clay crystals.<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref> Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if, for example, they are placed in a water solution containing the crystal components, automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.<br />
<br />
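A deliberately crude caricature of this process (not a physical model; names and rates are invented for illustration): each new layer copies the lattice of the layer beneath it, defects included, while rare copying errors introduce fresh defects that then propagate:<br />

```python
import random

def grow_layer(layer, rng, error_rate=0.05):
    """Copy a layer site by site; rare errors turn a site into a defect "X"."""
    return "".join("X" if rng.random() < error_rate else site
                   for site in layer)

rng = random.Random(42)
layers = ["." * 30]                    # defect-free seed layer
for _ in range(10):
    layers.append(grow_layer(layers[-1], rng))

# Once introduced, a defect is inherited by every subsequent layer.
defects = [{i for i, c in enumerate(l) if c == "X"} for l in layers]
```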
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Applications===<br />
<br />
<br />
<br />
It is a long-term goal of some engineering sciences to achieve a [[clanking replicator]], a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of [[labour (economics)|labor]], [[Capital (economics)|capital]] and [[distribution (business)|distribution]] in conventional [[factory|manufactured goods]].<br />
<br />
<br />
一些工程科学的长期目标是实现铿锵复制机器(clanking replicator),即一种能够自复制的物质装置。通常的动机是在保持制成品效用的同时降低单件成本。许多权威人士认为,在极限情况下,自复制产品的成本应接近木材或其他生物材料的单位重量成本,因为自复制省去了传统工业制成品所需的劳动力、资本和分销成本。<br />
<br />
<br />
<br />
<br />
<br />
A fully novel artificial replicator is a reasonable near-term goal.<br />
<br />
<br />
建立一个全新的人工复制因子是一个合理的近期目标。<br />
<br />
A [[NASA]] study recently placed the complexity of a [[clanking replicator]] at approximately that of [[Intel]]'s [[Pentium (brand)|Pentium]] 4 CPU.<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.<br />
<br />
<br />
美国宇航局最近的一项研究表明,铿锵复制因子的复杂度大约相当于英特尔奔腾4处理器的复杂度。也就是说,这项技术在一个合理的商业时间规模内,是可以由一个相对较小的工程团队以一个合理的成本实现的。<br />
<br />
<br />
<br />
<br />
<br />
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.<br />
<br />
<br />
鉴于目前对生物技术的浓厚兴趣和该领域的高额资金投入,利用现有细胞复制能力的尝试正当其时,并且很可能带来重要的见解和进展。<br />
<br />
<br />
<br />
<br />
<br />
A variation of self replication is of practical relevance in [[compiler]] construction, where a similar [[bootstrapping]] problem occurs as in natural self replication. A compiler ([[phenotype]]) can be applied on the compiler's own [[source code]] ([[genotype]]) producing the compiler itself. During compiler development, a modified ([[Mutation|mutated]]) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.<br />
<br />
<br />
自复制的一种变体在编译器构造中具有实际意义,其中会出现与天然自复制类似的自举问题。编译器(表型)可以作用于编译器自身的源代码(基因型),产生编译器本身。在编译器开发过程中,使用修改过的(变异的)源代码来创建下一代编译器。这个过程与天然自复制的不同之处在于,它由工程师指导,而非由主体自身完成。<br />
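This genotype/phenotype loop can be sketched in miniature. The sketch below uses Python's `exec` as a stand-in for a real compiler (a loose assumption: real bootstrapping compiles source to machine code, not to a Python function), so the "compiler" is a function that turns its own source text into the next generation of itself:

```python
def compile_source(source):
    # Phenotype: turn source text (the genotype) into a runnable "compiler".
    namespace = {}
    exec(source, namespace)
    return namespace["compile_source"]

# The compiler's own source code -- its "genotype". Mutating this string
# and recompiling models building the next generation of the compiler.
GENOTYPE = '''
def compile_source(source):
    namespace = {}
    exec(source, namespace)
    return namespace["compile_source"]
'''

generation_1 = compile_source(GENOTYPE)  # compiler applied to its own source
generation_2 = generation_1(GENOTYPE)    # next generation of the compiler
```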
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Mechanical self-replication 机械自复制==<br />
<br />
<br />
<br />
{{Main|Self-replicating machine}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
An activity in the field of robots is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:<br />
<br />
<br />
机器人领域的一项活动就是机器的自复制。由于所有机器人(至少在现代)都有相当数量的相同特性,一个自复制机器人(或者可能是一群机器人)需要做到以下几点:<br />
<br />
<br />
<br />
<br />
<br />
*Obtain construction materials<br />
*获得建造材料<br />
<br />
<br />
<br />
*Manufacture new parts including its smallest parts and thinking apparatus<br />
*制造新零件,包括最小的零件和思维组件<br />
<br />
<br />
<br />
*Provide a consistent power source<br />
*提供稳定的动力源<br />
<br />
<br />
<br />
*Program the new members<br />
*为新成员编程<br />
<br />
<br />
<br />
*Correct any errors in the offspring<br />
*改正子代产物的任何错误<br />
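The checklist above can be sketched as a toy replication loop. The `Robot` model and every function below are invented placeholders for the listed capabilities, not a real robotics API (the power source is simply assumed to be available):

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    firmware: str                       # the robot's "program"
    parts: list = field(default_factory=list)

def obtain_materials(stock, needed=3):
    return [stock.pop() for _ in range(needed)]   # gather construction materials

def manufacture_parts(materials):
    return [f"part<{m}>" for m in materials]      # including the smallest parts

def program(offspring, firmware):
    offspring.firmware = firmware                 # program the new member

def error_correct(offspring, parent):
    if offspring.firmware != parent.firmware:     # fix any mistakes in the copy
        offspring.firmware = parent.firmware

def replicate(parent, stock):
    parts = manufacture_parts(obtain_materials(stock))
    child = Robot(firmware="", parts=parts)       # consistent power source assumed
    program(child, parent.firmware)
    error_correct(child, parent)
    return child

stock = ["iron", "silicon", "copper", "iron"]
parent = Robot(firmware="controller-v1")
child = replicate(parent, stock)
```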
<br />
<br />
<br />
<br />
<br />
<br />
On a [[Nanotechnology|nano]] scale, [[Assembler (nanotechnology)|assemblers]] might also be designed to self-replicate under their own power. This, in turn, has given rise to the "[[grey goo]]" version of [[Armageddon]], as featured in such science fiction novels as ''[[Bloom (novel)|Bloom]]'', ''[[Prey (novel)|Prey]]'', and ''[[Recursion (novel)|Recursion]]''.<br />
<br />
<br />
在纳米尺度上,装配器(assembler)也可能被设计为依靠自身动力进行自复制。这反过来催生了“灰色粘质(grey goo)”版本的世界末日,正如《花开》(Bloom)、《掠食》(Prey)和《递归》(Recursion)等科幻小说中所描绘的那样。<br />
<br />
<br />
<br />
<br />
<br />
The [[Foresight Institute]] has published guidelines for researchers in mechanical self-replication.<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a [[broadcast architecture]].<br />
<br />
<br />
前瞻研究所(Foresight Institute)已经为机械自复制领域的研究人员发布了指导方针。指导方针建议研究人员使用若干特定技术来防止机械复制因子失控,例如使用广播式架构(broadcast architecture)。<br />
<br />
<br />
<br />
<br />
<br />
For a detailed article on mechanical reproduction as it relates to the industrial age see [[mass production]].<br />
<br />
<br />
有关与工业时代有关的机械复制的详细文章,请参阅'''<font color="#ff8000">大规模生产(mass production)</font>'''。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Fields 研究领域==<br />
<br />
<br />
{{refimprove section|date=August 2017}}<br />
<br />
<br />
<br />
Research has occurred in the following areas:<br />
<br />
<br />
在以下领域进行了研究:<br />
<br />
<br />
<br />
<br />
<br />
* [[Biology]] studies natural replication and replicators, and their interaction. These can be an important guide to avoid design difficulties in self-replicating machinery.<br />
<br />
<br />
<br />
* In [[Chemistry]] self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set <ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref> (often part of [[Systems chemistry]] field).<br />
<br />
<br />
<br />
* [[Meme]]tics studies ideas and how they propagate in human culture. Memes require only small amounts of material, and therefore have theoretical similarities to [[virus]]es and are often described as [[virus|viral]].<br />
<br />
<br />
<br />
* [[Nanotechnology]] or more precisely, [[molecular nanotechnology]] is concerned with making [[Nanotechnology|nano]] scale [[assembler (nanotechnology)|assemblers]]. Without self-replication, capital and assembly costs of molecular machines become impossibly large.<br />
<br />
<br />
<br />
* Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.<br />
<br />
<br />
<br />
* [[Computer security]]: Many computer security problems are caused by self-reproducing computer programs that infect computers — [[computer worm]]s and [[computer virus]]es.<br />
<br />
<br />
<br />
* In [[parallel computing]], it takes a long time to manually load a new program on every node of a large [[computer cluster]] or [[distributed computing]] system. Automatically loading new programs using [[mobile agent]]s can save the system administrator a lot of time and give users their results much quicker, as long as they don't get out of control.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==In industry 在工业界==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Space exploration and manufacturing 太空探索和制造业===<br />
<br />
<br />
<br />
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an [[autotroph]]ic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. [[Von Neumann Probe|Another model]] of self-replicating machine would copy itself through the galaxy and universe, sending information back.<br />
<br />
<br />
太空系统中自复制的目标是以较低的发射质量开发利用大量物质。例如,一台自养型自复制机器可以用太阳能电池覆盖月球或行星,并通过微波将电力传送回地球。一旦就位,这套建造了自身的机器还可以生产原材料或制成品,包括运输产品的运输系统。另一种自复制机器的模型则会在银河系和宇宙中不断复制自身,并将信息传回。<br />
<br />
<br />
<br />
<br />
<br />
In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.<br />
<br />
<br />
一般来说,由于这些系统是自养的,它们是已知最难实现、最复杂的复制因子。它们也被认为是最危险的,因为它们的繁殖不需要人类的任何输入。<br />
<br />
<br />
<br />
<br />
<br />
A classic theoretical study of replicators in space is the 1980 [[NASA]] study of autotrophic clanking replicators, edited by [[Robert Freitas]].<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
<br />
一个关于太空中复制因子的经典理论研究是1980年 NASA 关于自养铿锵复制因子的研究,由罗伯特·弗雷塔斯(Robert Freitas)编辑。<br />
<br />
<br />
<br />
<br />
<br />
Much of the design study was concerned with a simple, flexible chemical system for processing lunar [[regolith]], and the differences between the ratio of elements needed by the replicator, and the ratios available in regolith. The limiting element was [[Chlorine]], an essential element to process regolith for [[Aluminium]]. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts.<br />
<br />
<br />
大部分设计研究关注的是一个用于处理月球风化层的简单而灵活的化学系统,以及复制因子所需元素比例与风化层中可用比例之间的差异。限制性元素是氯,它是从风化层中提取铝所必需的元素。氯在月球风化层中非常稀少,通过输入适量的氯即可确保明显更快的复制速度。<br />
<br />
<br />
<br />
<br />
<br />
The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic [[robot]].<br />
<br />
<br />
参考设计指定了在轨道上运行的小型计算机控制电动车。每辆车可以装配一只简单的机械手或一个小型推土铲,构成一个基本的机器人。<br />
<br />
<br />
<br />
<br />
<br />
Power would be provided by a "canopy" of [[solar cell]]s supported on pillars. The other machinery could run under the canopy.<br />
<br />
<br />
电力将由支撑在支柱上的太阳能电池“天篷”提供。其他的机器可以在天篷下面运转。<br />
<br />
<br />
<br />
<br />
<br />
A "[[casting]] [[robot]]" would use a robotic arm with a few sculpting tools to make [[plaster]] [[molding (process)|mold]]s. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock ([[basalt]]) or purified metals. An [[electricity|electric]] [[oven]] melted the materials.<br />
<br />
<br />
一个“铸造机器人”将使用带有少量雕刻工具的机械臂来制作石膏模具。石膏模具易于制作,并能制造出表面光洁度良好的精密零件。然后,机器人用不导电的熔融岩石(玄武岩)或提纯金属铸造大部分零件。材料由电炉熔化。<br />
<br />
<br />
<br />
<br />
<br />
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".<br />
<br />
<br />
研究还指定了一个推测性的、更复杂的“芯片工厂”来生产计算机和电子系统,但设计者也表示,像运送“维生素”一样从地球运来这些芯片也许更为实际。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Molecular manufacturing 分子制造===<br />
<br />
{{Main|Molecular nanotechnology#Replicating nanorobots}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Nanotechnology|Nanotechnologists]] in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating [[assembler (nanotechnology)|assembler]] of [[nanometer]] dimensions [http://www.MolecularAssembler.com/KSRM/4.11.3.htm].<br />
<br />
<br />
纳米技术专家尤其相信,在人类设计出纳米尺度的自复制装配器之前,他们的工作很可能无法达到成熟状态。<br />
<br />
<br />
<br />
<br />
<br />
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether [[molecular manufacturing]] is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a [[Lego]]-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
<br />
这些系统比自养系统简单得多,因为它们获得了提纯的原料和能源供给,无需自行生产这些资源。这一区别是关于分子制造是否可行的部分争论的根源:许多认为不可行的权威人士,引用的显然是复杂的自养自复制系统的资料;而许多认为可行的权威人士,引用的显然是已被验证的、简单得多的自组装系统的资料。与此同时,2003年的一项实验展示了一个用乐高搭建的自主机器人,它能沿预设轨道,从外部提供的4个组件出发,组装出一个与自身完全相同的副本。<br />
<br />
<br />
<br />
<br />
<br />
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of [[protein biosynthesis]] (also see the listing for [[RNA]]).<br />
<br />
<br />
仅仅利用现有细胞的复制能力是不够的,因为蛋白质生物合成过程存在局限性(另见 RNA 词条)。<br />
<br />
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.<br />
<br />
<br />
我们需要的是合理设计一种具有更广泛合成能力的全新复制因子。<br />
<br />
<br />
<br />
<br />
<br />
In 2011, New York University scientists developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
<br />
<br />
2011年,纽约大学的科学家开发出了能够自复制的人工结构,这一过程有望产生新型材料。他们证明,不仅可以复制细胞 DNA 或 RNA 这样的分子,还可以复制离散结构;这些结构原则上可以呈现多种不同的形状,具有多种不同的功能特征,并可与多种不同类型的化学物质相关联。<br />
<br />
<br />
<br />
<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see [[alternative biochemistry]].<br />
<br />
<br />
有关假设的自我复制系统的其他化学基础的讨论,请参阅替代生物化学。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==See also 请参阅==<br />
<br />
<br />
<br />
* [[Artificial life]]<br />
<br />
<br />
<br />
* [[Astrochicken]]<br />
<br />
<br />
<br />
* [[Autopoiesis]]<br />
<br />
<br />
<br />
* [[Complex system]]<br />
<br />
<br />
<br />
* [[DNA replication]]<br />
<br />
<br />
<br />
* [[Life]]<br />
<br />
<br />
<br />
* [[Robot]]<br />
<br />
<br />
<br />
* [[RepRap]] (self-replicated 3D printer)<br />
<br />
<br />
<br />
* [[Self-replicating machine]]<br />
<br />
<br />
<br />
** [[Self-replicating spacecraft]]<br />
<br />
<br />
<br />
* [[Space manufacturing]]<br />
<br />
<br />
<br />
* [[Von Neumann universal constructor]]<br />
<br />
<br />
<br />
* [[Virus]]<br />
<br />
<br />
<br />
* [[Von Neumann machine (disambiguation)]]<br />
<br />
<br />
<br />
* [[Self reconfigurable]]<br />
<br />
<br />
<br />
* [[Final Anthropic Principle]]<br />
<br />
<br />
<br />
* [[Positive feedback]]<br />
<br />
<br />
<br />
* [[Harmonic]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==References 参考文献==<br />
<br />
<br />
<br />
{{reflist}}<br />
<br />
<br />
<br />
;Notes<br />
<br />
<br />
注释<br />
<br />
{{refbegin}}<br />
<br />
<br />
<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
<br />
<br />
<br />
* [[s:Advanced Automation for Space Missions|Advanced Automation for Space Missions]], a 1980 NASA study edited by [[Robert Freitas]]<br />
<br />
<br />
<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines] first comprehensive survey of entire field in 2004 by [[Robert Freitas]] and [[Ralph Merkle]]<br />
<br />
<br />
<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advance Concepts study by General Dynamics]- concluded that complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
<br />
<br />
<br />
* ''[[Gödel, Escher, Bach]]'' by [[Douglas Hofstadter]] (detailed discussion and many examples)<br />
<br />
<br />
<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.) Contemporary Math. vol. 135 (1992), 239-264.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:Self-replication| ]]<br />
<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Self-replication]]. Its edit history can be viewed at [[自复制/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
自复制 Self-replication(2020-10-15,粲兰)
<hr />
<div>此词条暂由袁一博翻译,未经人工整理和审校,带来阅读不便,请见谅。{{see also|Biological reproduction}}<br />
<br />
<br />
<br />
{{Use dmy dates|date=April 2019|cs1-dates=y}}<br />
<br />
<br />
<br />
[[Image:DNA chemical structure.svg|thumb|right|200px|[[Molecular structure]] of [[DNA]] ]]<br />
<br />
<br />
DNA 的分子结构<br />
<br />
'''Self-replication''' is any behavior of a [[dynamical system]] that yields construction of an identical or similar copy of itself. [[Cell (biology)|Biological cell]]s, given suitable environments, reproduce by [[cell division]]. During cell division, [[DNA]] is replicated and can be transmitted to offspring during [[reproduction]]. [[virus (biology)|Biological viruses]] can [[Viral replication|replicate]], but only by commandeering the reproductive machinery of cells through a process of infection. Harmful [[prion]] proteins can replicate by converting normal proteins into rogue forms.<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref> [[Computer virus]]es reproduce using the hardware and software already present on computers. Self-replication in [[robotics]] has been an area of research and a subject of interest in [[science fiction]]. Any self-replicating mechanism which does not make a perfect copy ([[mutation]]) will experience [[genetic variation]] and will create variants of itself. These variants will be subject to [[natural selection]], since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
<br />
自复制是指动力系统产生与自身相同或相似副本的任何行为。生物细胞在适宜的环境中通过细胞分裂进行繁殖。在细胞分裂过程中,DNA 被复制,并可在生殖过程中传递给后代。生物病毒可以复制,但只能通过感染过程征用细胞的复制机器。有害的朊病毒蛋白可以通过将正常蛋白质转化为异常形式来复制。计算机病毒利用计算机上已有的硬件和软件进行复制。机器人学中的自复制一直是一个研究领域,也是科幻小说感兴趣的主题。任何不能产生完美副本的自复制机制都会发生变异(mutation),从而产生自身的变体。这些变体将经受自然选择,因为其中一些会比其他变体更适应当前环境,并在繁衍上胜过它们。<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Overview 综述==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Theory 理论===<br />
<br />
<br />
<br />
{{See also|Von Neumann universal constructor}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Early research by [[John von Neumann]]<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref> established that replicators have several parts:<br />
<br />
<br />
约翰·冯·诺伊曼的早期研究表明复制因子有几个部分:<br />
<br />
<br />
<br />
<br />
<br />
*A coded representation of the replicator<br />
<br />
<br />
<br />
*A mechanism to copy the coded representation<br />
<br />
<br />
<br />
*A mechanism for effecting construction within the host environment of the replicator<br />
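These three parts can be illustrated with a minimal sketch, using a string as the coded "tape". Everything below is an illustrative toy, not von Neumann's actual construction: the "machine" is just a dictionary carrying its own description plus the two mechanisms.

```python
def copy_code(code):
    # Mechanism to copy the coded representation.
    return str(code)

def construct(code):
    # Mechanism effecting construction within the host environment:
    # builds a "machine" that carries its description and both mechanisms.
    return {"code": code, "construct": construct, "copy": copy_code}

CODE = "description-of-replicator"   # coded representation of the replicator

parent = construct(CODE)
# Replication: copy the tape, then construct a new machine from the copy.
child = parent["construct"](parent["copy"](parent["code"]))
```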
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
<br />
这种模式可能存在例外,尽管迄今尚未实现。例如,科学家已经接近于构建出能在由 RNA 单体和转录酶组成的“环境”中被复制的 RNA。在这种情况下,机体就是基因组,而专门的复制机制是外部的。对外部复制机制的需求尚未被克服,这类系统更准确的描述是“辅助复制”而非“自复制”。<br />
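"Assisted replication" in this sense can be illustrated with a toy template copier, where the copying mechanism (standing in for the transcriptase) is supplied entirely by the environment rather than by the template itself. The base-pairing table is real Watson-Crick RNA pairing; the rest is an illustrative simplification:

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}  # RNA base pairing

def environment_copier(template):
    # The copy mechanism lives outside the template (the "genome"),
    # which is why this counts as assisted rather than self-replication.
    return "".join(COMPLEMENT[base] for base in template)

rna = "AUGGC"
strand = environment_copier(rna)     # complementary strand
copy = environment_copier(strand)    # complement of the complement
```

Copying twice recovers the original sequence, mirroring how template-directed synthesis reproduces a strand in two rounds.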
<br />
<br />
<br />
<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a [[crystal]].<br />
<br />
<br />
然而,最简单的可能情况是只存在一个基因组。如果没有对自我繁殖步骤的某种说明,一个只有基因组的系统或许更应被描述为类似晶体的东西。<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Classes of self-replication 自复制的类别===<br />
<br />
<br />
<br />
Recent research<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref> has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
<br />
最近的研究已经开始对复制因子进行分类,通常基于它们所需要的支持程度。<br />
<br />
<br />
<br />
<br />
<br />
*Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms.<br />
<br />
*自然复制因子的设计全部或绝大部分来自非人类来源。这样的系统包含自然生命形式。<br />
<br />
<br />
*[[Autotroph]]ic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products.<br />
<br />
*自养(autotrophic)复制因子可以“在野外”进行自我复制,自行开采原材料。据推测,人类可以设计出非生物的自养复制因子,并且它们可以容易地接受人类产品的规格。<br />
<br />
<br />
*Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire.<br />
<br />
*自复制生产系统是一种假想的系统,它能利用金属棒、金属丝等工业原料生产自身的副本。<br />
<br />
<br />
*Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale.<br />
<br />
*自组装系统用已制成并送达的零件组装出自身的副本。这类系统的简单实例已在宏观尺度上得到演示。<br />
<br />
<br />
<br />
<br />
<br />
The design space for machine replicators is very broad. A comprehensive study<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> to date by [[Robert Freitas]] and [[Ralph Merkle]] has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.<br />
<br />
<br />
机器复制因子的设计空间非常广阔。迄今为止,罗伯特·弗雷塔斯(Robert Freitas)和拉尔夫·默克尔(Ralph Merkle)所做的最全面的一项研究确定了137个设计维度,并将其归入十几个独立的类别,包括:(1)复制控制,(2)复制信息,(3)复制基质,(4)复制因子结构,(5)被动部件,(6)主动子单元,(7)复制因子能量学,(8)复制因子运动学,(9)复制过程,(10)复制因子性能,(11)产物结构,以及(12)可进化性。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===A self-replicating computer program 自复制的计算机程序===<br />
<br />
<br />
<br />
{{Main|Quine (computing)}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[computer science]] a [[Quine (computing)|quine]] is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the [[Python (programming language)|Python programming language]] is:<br />
<br />
<br />
在计算机科学中,quine 是一种自我复制的计算机程序,当执行时,输出自己的代码。例如,利用Python语言编写的一个 quine 是:<br />
<br />
<br />
<br />
<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
<br />
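The self-reproduction property is easy to check directly: execute the quine, capture what it prints, and compare that with its own source. A minimal check using only the standard library:<br />

```python
import io
import contextlib

# The quine from above, stored as a string so its output can be captured.
quine = "a='a=%r;print(a%%a)';print(a%a)"

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)  # run the quine and capture what it prints

# The program's output is exactly its own source code.
assert buf.getvalue().rstrip("\n") == quine
```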
<br />
<br />
<br />
<br />
<br />
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.<br />
<br />
<br />
一种更平凡的方法是编写一个能够复制其所指向的任意数据流的程序,然后把这个程序指向它自身。在这种情况下,程序既被当作可执行代码,也被当作被操作的数据。这种方法在包括生物生命在内的大多数自复制系统中都很常见,而且更为简单,因为它不要求程序包含对自身的完整描述。<br />
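The "copier directed at itself" idea above can be sketched in a few lines of Python: a generic copier program is written to a temporary file and then pointed at its own source (the file names here are temporary and purely illustrative):<br />

```python
import subprocess
import sys
import tempfile

# A generic copier: prints whatever file it is pointed at.
copier_source = "import sys\nprint(open(sys.argv[1]).read(), end='')\n"

# Write the copier to disk, then point it at its own source file.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(copier_source)
    path = f.name

result = subprocess.run([sys.executable, path, path],
                        capture_output=True, text=True)

# Treated as data, the program reproduces itself without containing
# a description of itself.
assert result.stdout == copier_source
```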
<br />
<br />
<br />
<br />
<br />
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.<br />
<br />
<br />
在许多编程语言中,空程序是合法的,执行时不产生错误或其他输出。其输出因此与源代码相同,所以这种程序是平凡的自复制程序。<br />
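This degenerate case can also be demonstrated concretely: an empty Python source file runs successfully, and its (empty) output equals its (empty) source:<br />

```python
import subprocess
import sys
import tempfile

# An empty source file: a legal Python program whose output (nothing)
# equals its source (nothing) -- a trivially self-reproducing program.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    path = f.name  # the file is created and left empty

result = subprocess.run([sys.executable, path],
                        capture_output=True, text=True)
assert result.returncode == 0 and result.stdout == ""
```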
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating tiling 自复制式平铺===<br />
<br />
<br />
<br />
{{See also|Self-similarity}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[geometry]] a self-replicating tiling is a tiling pattern in which several [[congruence (geometry)|congruent]] tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as [[tessellation]]. The "sphinx" [[hexiamond]] is the only known self-replicating [[pentagon]].<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref> For example, four such [[concave polygon|concave]] pentagons can be joined together to make one with twice the dimensions.<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref> [[Solomon W. Golomb]] coined the term [[rep-tiles]] for self-replicating tilings.<br />
<br />
<br />
在几何学中,自复制式平铺是一种平铺图案:若干个全等的图块可以拼合成一个与原图块相似的更大图块。这是被称为镶嵌(tessellation)的研究领域的一个方面。“狮身人面像(sphinx)”六联三角形(hexiamond)是已知唯一能自复制的五边形。例如,四个这样的凹五边形可以拼合成一个尺寸为原来两倍的五边形。所罗门·W·戈洛姆(Solomon W. Golomb)创造了术语 rep-tiles 来指称自复制式平铺。<br />
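The rep-4 property is easy to verify computationally. The sketch below uses the L-tromino rather than the sphinx, since it lives on a square grid; the decomposition coordinates are one known way to cut the doubled tromino into four congruent copies:<br />

```python
# Verify a rep-4 tiling on a square grid: four congruent L-trominoes
# exactly cover a copy of the L-tromino scaled by a factor of 2.
TROMINO = {(0, 0), (1, 0), (0, 1)}

def normalize(cells):
    # Translate a cell set so its bounding box starts at the origin.
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def rotations(cells):
    # All four 90-degree rotations, each normalized.
    out, cur = [], cells
    for _ in range(4):
        out.append(normalize(cur))
        cur = {(-y, x) for x, y in cur}
    return out

# The scaled-up tile: every cell becomes a 2x2 block.
big = {(2 * x + dx, 2 * y + dy)
       for x, y in TROMINO for dx in (0, 1) for dy in (0, 1)}

# One known decomposition into four congruent pieces.
pieces = [
    {(0, 0), (1, 0), (0, 1)},
    {(2, 0), (3, 0), (3, 1)},
    {(0, 2), (0, 3), (1, 3)},
    {(1, 1), (2, 1), (1, 2)},
]

assert sum(len(p) for p in pieces) == len(big) == 12
assert set().union(*pieces) == big  # the pieces cover the big tile exactly
assert all(normalize(p) in rotations(TROMINO) for p in pieces)  # all congruent
```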
<br />
<br />
<br />
<br />
<br />
In 2012, [[Lee Sallows]] identified rep-tiles as a special instance of a [[self-tiling tile set]] or setiset. A setiset of order ''n'' is a set of ''n'' shapes that can be assembled in ''n'' different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-''n'' rep-tile is just a setiset composed of ''n'' identical pieces.<br />
<br />
<br />
2012年,李·萨洛斯(Lee Sallows)指出 rep-tiles 是自平铺图块集(self-tiling tile set,简称 setiset)的一个特例。一个 n 阶 setiset 是由 n 个形状组成的集合,它们能以 n 种不同的方式拼合成其自身的放大复本。其中每个形状互不相同的 setiset 被称为“完美的”。一个 rep-n 的 rep-tile 就是由 n 个全同部件组成的 setiset。<br />
<br />
{|<br />
|- style="vertical-align:bottom;"<br />
| [[File:Self-replication of sphynx hexidiamonds.svg|thumb|left|text-bottom|260px|Four '[[Sphinx tiling|sphinx]]' hexiamonds can be put together to form another sphinx. 四个“狮身人面像”六联三角形可以拼合成另一个狮身人面像]]<br />
| [[File:A rep-tile-based setiset of order 4.png|thumb|right|text-bottom|290px|A perfect [[Self-tiling tile set|setiset]] of order 4 一个完美的四阶 setiset]]<br />
|}<br />
<br />
{{clear}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self replicating clay crystals 自复制的粘土晶体===<br />
<br />
<br />
<br />
One form of natural self-replication that isn't based on DNA or RNA occurs in clay crystals.<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref> Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if e.g. placed in a water solution containing the crystal components; automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.<br />
<br />
<br />
粘土晶体中存在一种不基于 DNA 或 RNA 的天然自复制。粘土由大量小晶体组成,而且粘土环境会促进晶体生长。晶体由规则的原子晶格构成,例如放入含有晶体成分的水溶液中便能生长,晶体边界处的原子会自动排列成晶体形式。晶体中规则原子结构被打破的地方可能出现不规则性;当晶体生长时,这些不规则性可能随之传播,形成一种晶体不规则性的自复制。由于这些不规则性可能影响晶体裂解形成新晶体的概率,带有这类不规则性的晶体甚至可以被认为经历着进化发展。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Applications 应用===<br />
<br />
<br />
<br />
It is a long-term goal of some engineering sciences to achieve a [[clanking replicator]], a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of [[labour (economics)|labor]], [[Capital (economics)|capital]] and [[distribution (business)|distribution]] in conventional [[factory|manufactured goods]].<br />
<br />
<br />
一些工程科学的长期目标是实现“铿锵复制机”(clanking replicator),即一种能够自复制的实体装置。其通常的动机是在保持制成品效用的同时降低单件成本。许多权威人士认为,在极限情况下,自复制产品的成本应当接近木材或其他生物材料的单位重量成本,因为自复制省去了传统制成品中的劳动力、资本和分销成本。<br />
<br />
<br />
<br />
<br />
<br />
A fully novel artificial replicator is a reasonable near-term goal.<br />
<br />
<br />
建立一个全新的人工复制因子是一个合理的近期目标。<br />
<br />
A [[NASA]] study recently placed the complexity of a [[clanking replicator]] at approximately that of [[Intel]]'s [[Pentium (brand)|Pentium]] 4 CPU.<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.<br />
<br />
<br />
美国宇航局(NASA)最近的一项研究认为,铿锵复制机的复杂度大约相当于英特尔奔腾4处理器。也就是说,这项技术可以由一个相对较小的工程团队,在合理的商业时间尺度内、以合理的成本实现。<br />
<br />
<br />
<br />
<br />
<br />
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.<br />
<br />
<br />
鉴于目前各界对生物技术的浓厚兴趣以及该领域的大量资金投入,利用现有细胞复制能力的尝试正当其时,而且很可能带来重要的认识和进展。<br />
<br />
<br />
<br />
<br />
<br />
A variation of self replication is of practical relevance in [[compiler]] construction, where a similar [[bootstrapping]] problem occurs as in natural self replication. A compiler ([[phenotype]]) can be applied on the compiler's own [[source code]] ([[genotype]]) producing the compiler itself. During compiler development, a modified ([[Mutation|mutated]]) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.<br />
<br />
<br />
自复制的一种变体在编译器构造中具有实际意义,其中会出现与天然自复制类似的自举(bootstrapping)问题:编译器(表型)可以作用于编译器自身的源代码(基因型),产出编译器本身。在编译器开发过程中,通过修改(变异)源代码来创建下一代编译器。这一过程与天然自复制的不同之处在于,它由工程师主导,而不是由复制主体本身主导。<br />
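The fixed-point character of a compiler bootstrap can be sketched with a toy model. Here the "compiler" is merely a source-to-source transform (tabs to spaces) rather than a real code generator, and the stage names are illustrative; what the sketch shows is the standard convergence check, where a stage that reproduces itself ends the bootstrap:<br />

```python
# Toy bootstrap: the "compiler" is a source-to-source transform
# applied to its own source. Real bootstraps emit machine code, but the
# stopping rule is the same: stop when a stage rebuilds itself exactly.
def compile_src(src: str) -> str:
    return src.replace("\t", "    ")  # "compilation": expand tabs

# The compiler's own source, as maintained by the engineer (it contains
# one literal tab character that the transform will normalize).
compiler_source = 'def compile_src(src):\n\treturn src.replace("\\t", "    ")\n'

stage2 = compile_src(compiler_source)  # compiler built by an existing compiler
stage3 = compile_src(stage2)           # compiler rebuilt by itself
assert stage2 == stage3                # fixed point: the build has converged
```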
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Mechanical self-replication 机械自复制==<br />
<br />
<br />
<br />
{{Main|Self-replicating machine}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
An activity in the field of robots is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:<br />
<br />
<br />
机器人领域的一项活动就是机器的自复制。由于所有机器人(至少在现代)都有相当数量的相同特性,一个自复制机器人(或者可能是一群机器人)需要做到以下几点:<br />
<br />
<br />
<br />
<br />
<br />
*Obtain construction materials<br />
*获取建造材料<br />
<br />
<br />
<br />
*Manufacture new parts including its smallest parts and thinking apparatus<br />
*制造新零件,包括最小的零件和思维组件<br />
<br />
<br />
<br />
*Provide a consistent power source<br />
*提供持续稳定的动力源<br />
<br />
<br />
<br />
*Program the new members<br />
*为新成员编程<br />
<br />
<br />
<br />
*Error-correct any mistakes in the offspring<br />
*改正子代产物的任何错误<br />
<br />
<br />
<br />
<br />
<br />
<br />
On a [[Nanotechnology|nano]] scale, [[Assembler (nanotechnology)|assemblers]] might also be designed to self-replicate under their own power. This, in turn, has given rise to the "[[grey goo]]" version of [[Armageddon]], as featured in such science fiction novels as ''[[Bloom (novel)|Bloom]]'', ''[[Prey (novel)|Prey]]'', and ''[[Recursion (novel)|Recursion]]''.<br />
<br />
<br />
在纳米尺度上,组装器(assembler)也可能被设计成依靠自身动力进行自复制。这进而催生了“灰色粘质(grey goo)”版本的世界末日设想,正如《Bloom》《掠食》(Prey)和《递归》(Recursion)等科幻小说所描绘的那样。<br />
<br />
<br />
<br />
<br />
<br />
The [[Foresight Institute]] has published guidelines for researchers in mechanical self-replication.<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a [[broadcast architecture]].<br />
<br />
<br />
前瞻研究所(Foresight Institute)已为机械自复制领域的研究人员发布了指导方针。该指导方针建议研究人员采用若干特定技术来防止机械复制因子失控,例如使用广播式架构(broadcast architecture)。<br />
<br />
<br />
<br />
<br />
<br />
For a detailed article on mechanical reproduction as it relates to the industrial age see [[mass production]].<br />
<br />
<br />
有关与工业时代有关的机械复制的详细文章,请参阅'''<font color="#ff8000">大规模生产(mass production)</font>'''。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Fields 研究领域==<br />
<br />
<br />
{{refimprove section|date=August 2017}}<br />
<br />
<br />
<br />
Research has occurred in the following areas:<br />
<br />
<br />
在以下领域进行了研究:<br />
<br />
<br />
<br />
<br />
<br />
* [[Biology]] studies natural replication and replicators, and their interaction. These can be an important guide to avoid design difficulties in self-replicating machinery.<br />
<br />
<br />
<br />
* In [[Chemistry]] self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set <ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref> (often part of [[Systems chemistry]] field).<br />
<br />
<br />
<br />
* [[Meme]]tics studies ideas and how they propagate in human culture. Memes require only small amounts of material, and therefore have theoretical similarities to [[virus]]es and are often described as [[virus|viral]].<br />
<br />
<br />
<br />
* [[Nanotechnology]] or more precisely, [[molecular nanotechnology]] is concerned with making [[Nanotechnology|nano]] scale [[assembler (nanotechnology)|assemblers]]. Without self-replication, capital and assembly costs of molecular machines become impossibly large.<br />
<br />
<br />
<br />
* Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.<br />
<br />
<br />
<br />
* [[Computer security]]: Many computer security problems are caused by self-reproducing computer programs that infect computers — [[computer worm]]s and [[computer virus]]es.<br />
<br />
<br />
<br />
* In [[parallel computing]], it takes a long time to manually load a new program on every node of a large [[computer cluster]] or [[distributed computing]] system. Automatically loading new programs using [[mobile agent]]s can save the system administrator a lot of time and give users their results much quicker, as long as they don't get out of control.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==In industry 在工业界==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Space exploration and manufacturing 太空探索和制造业===<br />
<br />
<br />
<br />
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an [[autotroph]]ic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. [[Von Neumann Probe|Another model]] of self-replicating machine would copy itself through the galaxy and universe, sending information back.<br />
<br />
<br />
太空系统中自复制的目标是以较低的发射质量开发利用大量物质。例如,一台自养的自复制机器可以用太阳能电池覆盖月球或行星表面,并以微波将电力传回地球。一旦就位,建造其自身的同一套机器还可以生产原材料或制成品,包括用于运送产品的运输系统。另一种自复制机器的模型则会在银河系和宇宙中不断复制自身,并把信息传回。<br />
<br />
<br />
<br />
<br />
<br />
In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.<br />
<br />
<br />
总的来说,由于这些系统是自养的,它们是已知最难实现、最复杂的复制因子。它们也被认为是最危险的,因为它们的繁殖不需要人类的任何投入。<br />
<br />
<br />
<br />
<br />
<br />
A classic theoretical study of replicators in space is the 1980 [[NASA]] study of autotrophic clanking replicators, edited by [[Robert Freitas]].<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
<br />
关于太空中复制因子的一项经典理论研究,是1980年由罗伯特·弗雷塔斯(Robert Freitas)主编的 NASA 自养铿锵复制机研究。<br />
<br />
<br />
<br />
<br />
<br />
Much of the design study was concerned with a simple, flexible chemical system for processing lunar [[regolith]], and the differences between the ratio of elements needed by the replicator, and the ratios available in regolith. The limiting element was [[Chlorine]], an essential element to process regolith for [[Aluminium]]. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts.<br />
<br />
<br />
设计研究的大部分内容涉及一套用于处理月球风化层的简单、灵活的化学系统,以及复制因子所需元素比例与风化层中可得比例之间的差异。限制性元素是氯,它是处理风化层以提取铝所必需的元素。氯在月球风化层中非常稀少,只需从地球输入适量的氯,就能确保大幅提高复制速度。<br />
<br />
<br />
<br />
<br />
<br />
The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic [[robot]].<br />
<br />
<br />
参考设计规定了由计算机控制、在轨道上运行的小型电动推车。每辆推车可以装有简单的机械手或小型推土铲,构成一个基本的机器人。<br />
<br />
<br />
<br />
<br />
<br />
Power would be provided by a "canopy" of [[solar cell]]s supported on pillars. The other machinery could run under the canopy.<br />
<br />
<br />
电力将由立柱支撑的太阳能电池“顶棚”提供,其他机械可以在顶棚下运转。<br />
<br />
<br />
<br />
<br />
<br />
A "[[casting]] [[robot]]" would use a robotic arm with a few sculpting tools to make [[plaster]] [[molding (process)|mold]]s. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock ([[basalt]]) or purified metals. An [[electricity|electric]] [[oven]] melted the materials.<br />
<br />
<br />
“铸造机器人”将使用装有若干雕刻工具的机械臂来制作石膏模具。石膏模具易于制作,能铸出表面光洁度良好的精密零件。随后,机器人用不导电的熔融岩石(玄武岩)或提纯金属铸造大部分零件,材料由电炉熔化。<br />
<br />
<br />
<br />
<br />
<br />
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".<br />
<br />
<br />
研究还给出了一个推测性的、更为复杂的“芯片工厂”来生产计算机和电子系统,但设计者也表示,把芯片当作“维生素”从地球运送过来也许更为可行。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Molecular manufacturing 分子制造===<br />
<br />
{{Main|Molecular nanotechnology#Replicating nanorobots}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Nanotechnology|Nanotechnologists]] in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating [[assembler (nanotechnology)|assembler]] of [[nanometer]] dimensions [http://www.MolecularAssembler.com/KSRM/4.11.3.htm].<br />
<br />
<br />
纳米技术专家尤其相信,在人类设计出纳米尺度的自复制组装器之前,他们的工作很可能无法达到成熟状态。<br />
<br />
<br />
<br />
<br />
<br />
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether [[molecular manufacturing]] is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a [[Lego]]-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
<br />
这些系统比自养系统简单得多,因为它们获得的是提纯过的原料和现成的能源,不必自己生产这些投入。这一区别正是关于分子制造是否可行的一些争论的根源:许多认为不可能的权威人士,引用的显然是复杂的自养自复制系统的资料;而许多认为可能的权威人士,引用的显然是简单得多、且已获演示的自组装系统的资料。与此同时,2003年的一项实验演示了一个用乐高搭建的自主机器人,它能沿预设轨道行进,并从外部提供的4个组件出发组装出自身的精确拷贝。<br />
<br />
<br />
<br />
<br />
<br />
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of [[protein biosynthesis]] (also see the listing for [[RNA]]).<br />
<br />
<br />
仅仅利用现有细胞的复制能力是不够的,因为蛋白质生物合成过程存在局限性(另见 RNA 词条)。<br />
<br />
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.<br />
<br />
<br />
我们需要的是合理设计一种具有更广泛合成能力的全新复制因子。<br />
<br />
<br />
<br />
<br />
<br />
In 2011, New York University scientists have developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They have demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
<br />
<br />
2011年,纽约大学的科学家开发出了能够自复制的人工结构,这一过程有望产生新型材料。他们证明,可复制的不仅是细胞 DNA 或 RNA 这样的分子,还包括原则上可以呈现多种形状、具有多种功能特征、并可与多种化学物质相关联的离散结构。<br />
<br />
<br />
<br />
<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see [[alternative biochemistry]].<br />
<br />
<br />
有关假设的自我复制系统的其他化学基础的讨论,请参阅替代生物化学。<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==See also 请参阅==<br />
<br />
<br />
<br />
* [[Artificial life]]<br />
* [[Astrochicken]]<br />
* [[Autopoiesis]]<br />
* [[Complex system]]<br />
* [[DNA replication]]<br />
* [[Life]]<br />
* [[Robot]]<br />
* [[RepRap]] (self-replicating 3D printer)<br />
* [[Self-replicating machine]]<br />
** [[Self-replicating spacecraft]]<br />
* [[Space manufacturing]]<br />
* [[Von Neumann universal constructor]]<br />
* [[Virus]]<br />
* [[Von Neumann machine (disambiguation)]]<br />
* [[Self reconfigurable]]<br />
* [[Final Anthropic Principle]]<br />
* [[Positive feedback]]<br />
* [[Harmonic]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==References==<br />
<br />
<br />
{{reflist}}<br />
<br />
<br />
<br />
;Notes<br />
<br />
<br />
{{refbegin}}<br />
<br />
<br />
<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
* [[s:Advanced Automation for Space Missions|Advanced Automation for Space Missions]], a 1980 NASA study edited by [[Robert Freitas]]<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines], the first comprehensive survey of the entire field, published in 2004 by [[Robert Freitas]] and [[Ralph Merkle]]<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advanced Concepts study by General Dynamics], which concluded that the complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
* ''[[Gödel, Escher, Bach]]'' by [[Douglas Hofstadter]] (detailed discussion and many examples)<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.), Contemporary Math. vol. 135 (1992), 239–264.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:Self-replication| ]]<br />
<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Self-replication]]. Its edit history can be viewed at [[自复制/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E8%87%AA%E5%A4%8D%E5%88%B6_Self-replication&diff=15264自复制 Self-replication2020-10-15T06:28:06Z<p>粲兰:</p>
<hr />
<div>This entry was provisionally translated by 袁一博 and has not yet been edited or proofread; apologies for any reading inconvenience.{{see also|Biological reproduction}}<br />
<br />
<br />
<br />
{{Use dmy dates|date=April 2019|cs1-dates=y}}<br />
<br />
<br />
<br />
[[Image:DNA chemical structure.svg|thumb|right|200px|[[Molecular structure]] of [[DNA]] ]]<br />
<br />
<br />
'''Self-replication''' is any behavior of a [[dynamical system]] that yields construction of an identical or similar copy of itself. [[Cell (biology)|Biological cell]]s, given suitable environments, reproduce by [[cell division]]. During cell division, [[DNA]] is replicated and can be transmitted to offspring during [[reproduction]]. [[virus (biology)|Biological viruses]] can [[Viral replication|replicate]], but only by commandeering the reproductive machinery of cells through a process of infection. Harmful [[prion]] proteins can replicate by converting normal proteins into rogue forms.<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref> [[Computer virus]]es reproduce using the hardware and software already present on computers. Self-replication in [[robotics]] has been an area of research and a subject of interest in [[science fiction]]. Any self-replicating mechanism which does not make a perfect copy ([[mutation]]) will experience [[genetic variation]] and will create variants of itself. These variants will be subject to [[natural selection]], since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Overview==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Theory===<br />
<br />
<br />
<br />
<br />
{{See also|Von Neumann universal constructor}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Early research by [[John von Neumann]]<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref> established that replicators have several parts:<br />
<br />
<br />
<br />
<br />
<br />
<br />
*A coded representation of the replicator<br />
<br />
<br />
<br />
*A mechanism to copy the coded representation<br />
<br />
<br />
<br />
*A mechanism for effecting construction within the host environment of the replicator<br />
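The three parts above can be sketched in code. This is a toy simulation (not von Neumann's actual construction): the part names, the genome tuple, and the environment list are all illustrative assumptions.

```python
# Toy sketch of the three replicator parts named above: a coded
# representation (genome), a copy mechanism, and a construction mechanism
# acting within a host environment of raw parts.

GENOME = ("arm", "welder", "tape-reader")  # coded representation of the machine


def copy_genome(genome):
    """Copy mechanism: duplicate the coded representation."""
    return tuple(genome)


def construct(genome, environment):
    """Construction mechanism: build an offspring from parts in the environment."""
    machine = {"parts": [], "genome": None}
    for part in genome:
        environment.remove(part)            # consume raw material from the host
        machine["parts"].append(part)
    machine["genome"] = copy_genome(genome)  # give the offspring its own tape
    return machine


environment = ["arm", "welder", "tape-reader", "arm", "welder", "tape-reader"]
child = construct(GENOME, environment)
grandchild = construct(child["genome"], environment)  # the copy replicates too
```

Because each offspring carries its own copy of the genome, the process can continue as long as the environment supplies parts.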
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
<br />
<br />
<br />
<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a [[crystal]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Classes of self-replication===<br />
<br />
<br />
<br />
<br />
Recent research<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref> has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
<br />
<br />
<br />
<br />
<br />
*Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms.<br />
*[[Autotroph]]ic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products.<br />
*Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire.<br />
*Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale.<br />
<br />
<br />
<br />
<br />
<br />
The design space for machine replicators is very broad. A comprehensive study<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> to date by [[Robert Freitas]] and [[Ralph Merkle]] has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.<br />
<br />
<br />
<br />
<br />
<br />
<br />
===A self-replicating computer program===<br />
<br />
<br />
<br />
<br />
{{Main|Quine (computing)}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[computer science]] a [[Quine (computing)|quine]] is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the [[Python (programming language)|Python programming language]] is:<br />
<br />
<br />
<br />
<br />
<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
<br />
<br />
<br />
<br />
<br />
<br />
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.<br />
<br />
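A minimal sketch of this copy-any-stream approach: a program that copies whatever file it is pointed at, and defaults to its own source file. The file-based framing is an illustrative choice, not part of the original text.

```python
# A program that copies the file it is directed at to stdout. Pointed at its
# own source file, it reproduces itself: the program text is treated both as
# executable code and as data to be manipulated.
import pathlib
import sys


def copy_stream(path):
    """Return the contents of the file the program is directed at."""
    return pathlib.Path(path).read_text()


if __name__ == "__main__":
    # Default to copying this program's own source file.
    target = sys.argv[1] if len(sys.argv) > 1 else __file__
    sys.stdout.write(copy_stream(target))
```

Unlike the quine, this program contains no description of itself; the self-reproduction comes entirely from where it is pointed.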
<br />
<br />
<br />
<br />
<br />
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating tiling===<br />
<br />
<br />
<br />
<br />
{{See also|Self-similarity}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[geometry]] a self-replicating tiling is a tiling pattern in which several [[congruence (geometry)|congruent]] tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as [[tessellation]]. The "sphinx" [[hexiamond]] is the only known self-replicating [[pentagon]].<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref> For example, four such [[concave polygon|concave]] pentagons can be joined together to make one with twice the dimensions.<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref> [[Solomon W. Golomb]] coined the term [[rep-tiles]] for self-replicating tilings.<br />
<br />
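The rep-tile property can be checked on a grid. The sphinx lives on a triangular grid with fiddly coordinates, so this sketch uses the L-tromino instead, which is also a rep-tile (rep-4); the specific decomposition coordinates are an illustrative choice.

```python
# Check that the L-tromino is a rep-tile: four congruent copies tile a
# double-scale copy of itself.
def normalize(cells):
    """Translate a set of grid cells so its minimum corner is at (0, 0)."""
    xs = min(x for x, y in cells)
    ys = min(y for x, y in cells)
    return frozenset((x - xs, y - ys) for x, y in cells)


def rotations(cells):
    """All four 90-degree rotations of a cell set, normalized."""
    shapes = []
    current = cells
    for _ in range(4):
        shapes.append(normalize(current))
        current = frozenset((-y, x) for x, y in current)  # rotate 90 degrees
    return shapes


TROMINO = frozenset({(0, 0), (1, 0), (0, 1)})

# The tromino scaled by 2: each unit cell becomes a 2x2 block of cells.
scaled = frozenset((2 * x + dx, 2 * y + dy)
                   for x, y in TROMINO for dx in (0, 1) for dy in (0, 1))

# One decomposition of the scaled tromino into four tromino-shaped pieces.
pieces = [
    frozenset({(0, 0), (1, 0), (0, 1)}),
    frozenset({(2, 0), (3, 0), (3, 1)}),
    frozenset({(0, 2), (0, 3), (1, 3)}),
    frozenset({(1, 1), (2, 1), (1, 2)}),
]

# Every piece is congruent to the tromino, and together they tile the copy
# exactly, with no gaps or overlaps.
assert all(normalize(p) in rotations(TROMINO) for p in pieces)
assert frozenset().union(*pieces) == scaled
assert sum(len(p) for p in pieces) == len(scaled)
```

The same checks, with triangular-grid coordinates and reflections added, would verify the sphinx decomposition pictured below.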
<br />
<br />
<br />
<br />
<br />
In 2012, [[Lee Sallows]] identified rep-tiles as a special instance of a [[self-tiling tile set]] or setiset. A setiset of order ''n'' is a set of ''n'' shapes that can be assembled in ''n'' different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-''n'' rep-tile is just a setiset composed of ''n'' identical pieces.<br />
<br />
<br />
{|<br />
|- style="vertical-align:bottom;"<br />
[[File:Self-replication of sphynx hexidiamonds.svg|thumb|left|text-bottom|260px|Four '[[Sphinx tiling|sphinx]]' hexiamonds can be put together to form another sphinx.]]<br />
[[File:A rep-tile-based setiset of order 4.png|thumb|right|text-bottom|290px|A perfect [[Self-tiling tile set|setiset]] of order 4]]<br />
|}<br />
<br />
{{clear}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating clay crystals===<br />
<br />
<br />
One form of natural self-replication that isn't based on DNA or RNA occurs in clay crystals.<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref> Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if e.g. placed in a water solution containing the crystal components; automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.<br />
<br />
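The propagation mechanism described above can be caricatured in a few lines. This is a minimal simulation, not a physical model: the layer representation and the 'x' defect marker are illustrative assumptions.

```python
# Each new crystal layer copies the previous one, so an irregularity (defect)
# in the lattice propagates as the crystal grows -- a crude analogue of the
# self-replication of crystal irregularities described above.
def grow_layer(layer):
    """Grow one layer by copying the previous one, defects included."""
    return list(layer)


def grow_crystal(seed_layer, layers):
    crystal = [list(seed_layer)]
    for _ in range(layers - 1):
        crystal.append(grow_layer(crystal[-1]))
    return crystal


# 'A' marks a regular lattice site, 'x' an irregularity in the seed layer.
seed = ["A", "A", "x", "A"]
crystal = grow_crystal(seed, layers=5)
```

Every grown layer reproduces the seed's defect pattern, which is the sense in which the irregularity "self-replicates".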
<br />
<br />
<br />
<br />
<br />
===Applications===<br />
<br />
<br />
It is a long-term goal of some engineering sciences to achieve a [[clanking replicator]], a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of [[labour (economics)|labor]], [[Capital (economics)|capital]] and [[distribution (business)|distribution]] in conventional [[factory|manufactured goods]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
A fully novel artificial replicator is a reasonable near-term goal.<br />
<br />
<br />
A [[NASA]] study recently placed the complexity of a [[clanking replicator]] at approximately that of [[Intel]]'s [[Pentium (brand)|Pentium]] 4 CPU.<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.<br />
<br />
<br />
<br />
<br />
<br />
<br />
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A variation of self replication is of practical relevance in [[compiler]] construction, where a similar [[bootstrapping]] problem occurs as in natural self replication. A compiler ([[phenotype]]) can be applied on the compiler's own [[source code]] ([[genotype]]) producing the compiler itself. During compiler development, a modified ([[Mutation|mutated]]) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.<br />
<br />
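The bootstrapping pattern above can be illustrated with a toy "compiler" (really just `exec` over a source string; the names and the source string are illustrative, not a real compiler):

```python
# A "compiler" applied to its own source code produces the next generation
# of itself: the source string is the genotype, the callable the phenotype.
COMPILER_SOURCE = '''
def compile_source(src):
    """Turn a source string (genotype) into a callable (phenotype)."""
    namespace = {}
    exec(src, namespace)
    return namespace["compile_source"]
'''

# Generation 1 is built "by hand": the initial bootstrap step.
bootstrap_namespace = {}
exec(COMPILER_SOURCE, bootstrap_namespace)
gen1 = bootstrap_namespace["compile_source"]

# Each later generation is built by the previous compiler from the source.
# Editing COMPILER_SOURCE here would correspond to a "mutation" that the
# engineer introduces before building the next generation.
gen2 = gen1(COMPILER_SOURCE)
gen3 = gen2(COMPILER_SOURCE)
```

As the text notes, the direction comes from the engineer: the source only changes when someone edits it between generations.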
<br />
<br />
<br />
<br />
<br />
==Mechanical self-replication==<br />
<br />
<br />
{{Main|Self-replicating machine}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
An activity in the field of robots is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:<br />
<br />
<br />
<br />
<br />
<br />
<br />
*Obtain construction materials<br />
*Manufacture new parts, including its smallest parts and thinking apparatus<br />
*Provide a consistent power source<br />
*Program the new members<br />
*Error-correct any mistakes in the offspring<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
On a [[Nanotechnology|nano]] scale, [[Assembler (nanotechnology)|assemblers]] might also be designed to self-replicate under their own power. This, in turn, has given rise to the "[[grey goo]]" version of [[Armageddon]], as featured in such science fiction novels as ''[[Bloom (novel)|Bloom]]'', ''[[Prey (novel)|Prey]]'', and ''[[Recursion (novel)|Recursion]]''.<br />
<br />
<br />
<br />
<br />
<br />
<br />
The [[Foresight Institute]] has published guidelines for researchers in mechanical self-replication.<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a [[broadcast architecture]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
For a detailed article on mechanical reproduction as it relates to the industrial age see [[mass production]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Fields==<br />
<br />
<br />
{{refimprove section|date=August 2017}}<br />
<br />
<br />
<br />
Research has occurred in the following areas:<br />
<br />
<br />
<br />
<br />
<br />
<br />
* [[Biology]] studies natural replication and replicators, and their interaction. These can be an important guide to avoid design difficulties in self-replicating machinery.<br />
* In [[chemistry]], self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set<ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref> (often part of the [[systems chemistry]] field).<br />
* [[Meme]]tics studies ideas and how they propagate in human culture. Memes require only small amounts of material, and therefore have theoretical similarities to [[virus]]es and are often described as [[virus|viral]].<br />
* [[Nanotechnology]], or more precisely [[molecular nanotechnology]], is concerned with making nanoscale [[assembler (nanotechnology)|assemblers]]. Without self-replication, capital and assembly costs of molecular machines become impossibly large.<br />
* Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.<br />
* [[Computer security]]: Many computer security problems are caused by self-reproducing computer programs that infect computers — [[computer worm]]s and [[computer virus]]es.<br />
* In [[parallel computing]], it takes a long time to manually load a new program on every node of a large [[computer cluster]] or [[distributed computing]] system. Automatically loading new programs using [[mobile agent]]s can save the system administrator a lot of time and give users their results much quicker, as long as they don't get out of control.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==In industry==<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Space exploration and manufacturing===<br />
<br />
<br />
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an [[autotroph]]ic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. [[Von Neumann Probe|Another model]] of self-replicating machine would copy itself through the galaxy and universe, sending information back.<br />
<br />
<br />
<br />
<br />
<br />
<br />
In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A classic theoretical study of replicators in space is the 1980 [[NASA]] study of autotrophic clanking replicators, edited by [[Robert Freitas]].<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
<br />
<br />
<br />
<br />
<br />
Much of the design study was concerned with a simple, flexible chemical system for processing lunar [[regolith]], and the differences between the ratio of elements needed by the replicator, and the ratios available in regolith. The limiting element was [[Chlorine]], an essential element to process regolith for [[Aluminium]]. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts.<br />
<br />
<br />
<br />
<br />
<br />
<br />
The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic [[robot]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
Power would be provided by a "canopy" of [[solar cell]]s supported on pillars. The other machinery could run under the canopy.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A "[[casting]] [[robot]]" would use a robotic arm with a few sculpting tools to make [[plaster]] [[molding (process)|mold]]s. Plaster molds are easy to make, and produce precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock ([[basalt]]) or purified metals, melted in an [[electricity|electric]] [[oven]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Molecular manufacturing===<br />
<br />
<br />
{{Main|Molecular nanotechnology#Replicating nanorobots}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Nanotechnology|Nanotechnologists]] in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating [[assembler (nanotechnology)|assembler]] of [[nanometer]] dimensions [http://www.MolecularAssembler.com/KSRM/4.11.3.htm].<br />
<br />
<br />
<br />
<br />
<br />
<br />
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether [[molecular manufacturing]] is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a [[Lego]]-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
<br />
<br />
<br />
<br />
<br />
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of [[protein biosynthesis]] (also see the listing for [[RNA]]).<br />
<br />
<br />
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.<br />
<br />
<br />
<br />
<br />
<br />
<br />
In 2011, New York University scientists have developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They have demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
<br />
<br />
<br />
<br />
<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see [[alternative biochemistry]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
<br />
* [[Artificial life]]<br />
<br />
<br />
<br />
* [[Astrochicken]]<br />
<br />
<br />
<br />
* [[Autopoiesis]]<br />
<br />
<br />
<br />
* [[Complex system]]<br />
<br />
<br />
<br />
* [[DNA replication]]<br />
<br />
<br />
<br />
* [[Life]]<br />
<br />
<br />
<br />
* [[Robot]]<br />
<br />
<br />
<br />
* [[RepRap]] (self-replicated 3D printer)<br />
<br />
<br />
<br />
* [[Self-replicating machine]]<br />
<br />
<br />
<br />
** [[Self-replicating spacecraft]]<br />
<br />
<br />
<br />
* [[Space manufacturing]]<br />
<br />
<br />
<br />
* [[Von Neumann universal constructor]]<br />
<br />
<br />
<br />
* [[Virus]]<br />
<br />
<br />
<br />
* [[Von Neumann machine (disambiguation)]]<br />
<br />
<br />
<br />
* [[Self reconfigurable]]<br />
<br />
<br />
<br />
* [[Final Anthropic Principle]]<br />
<br />
<br />
<br />
* [[Positive feedback]]<br />
<br />
<br />
<br />
* [[Harmonic]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==References==<br />
<br />
<br />
{{reflist}}<br />
<br />
<br />
<br />
;Notes<br />
<br />
<br />
{{refbegin}}<br />
<br />
<br />
<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
<br />
<br />
<br />
* [[s:Advanced Automation for Space Missions|Advanced Automation for Space Missions]], a 1980 NASA study edited by [[Robert Freitas]]<br />
<br />
<br />
<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines] first comprehensive survey of entire field in 2004 by [[Robert Freitas]] and [[Ralph Merkle]]<br />
<br />
<br />
<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advanced Concepts study by General Dynamics] - concluded that the complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
<br />
<br />
<br />
* ''[[Gödel, Escher, Bach]]'' by [[Douglas Hofstadter]] (detailed discussion and many examples)<br />
<br />
<br />
<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.) Contemporary Math. vol. 135 (1992), 239-264.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:Self-replication| ]]<br />
<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Self-replication]]. Its edit history can be viewed at [[自复制/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E8%87%AA%E5%A4%8D%E5%88%B6_Self-replication&diff=15225自复制 Self-replication2020-10-14T15:36:31Z<p>粲兰:</p>
<hr />
<div>This entry is currently being translated by 袁一博 and has not yet been copyedited or reviewed; apologies for any inconvenience this causes readers.{{see also|Biological reproduction}}<br />
<br />
<br />
<br />
{{Use dmy dates|date=April 2019|cs1-dates=y}}<br />
<br />
<br />
<br />
[[Image:DNA chemical structure.svg|thumb|right|200px|[[Molecular structure]] of [[DNA]] ]]<br />
<br />
<br />
'''Self-replication''' is any behavior of a [[dynamical system]] that yields construction of an identical or similar copy of itself. [[Cell (biology)|Biological cell]]s, given suitable environments, reproduce by [[cell division]]. During cell division, [[DNA]] is replicated and can be transmitted to offspring during [[reproduction]]. [[virus (biology)|Biological viruses]] can [[Viral replication|replicate]], but only by commandeering the reproductive machinery of cells through a process of infection. Harmful [[prion]] proteins can replicate by converting normal proteins into rogue forms.<ref>{{cite news|url=http://news.bbc.co.uk/1/hi/health/8435320.stm |title='Lifeless' prion proteins are 'capable of evolution' |work=BBC News |date=2010-01-01 |accessdate=2013-10-22}}</ref> [[Computer virus]]es reproduce using the hardware and software already present on computers. Self-replication in [[robotics]] has been an area of research and a subject of interest in [[science fiction]]. Any self-replicating mechanism which does not make a perfect copy ([[mutation]]) will experience [[genetic variation]] and will create variants of itself. These variants will be subject to [[natural selection]], since some will be better at surviving in their current environment than others and will out-breed them.<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Overview==<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Theory===<br />
<br />
<br />
<br />
<br />
{{See also|Von Neumann universal constructor}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Early research by [[John von Neumann]]<ref name=Hixon_vonNeumann>{{cite book|last=von Neumann|first=John|title=The Hixon Symposium|year=1948|location=Pasadena, California|pages=1–36}}</ref> established that replicators have several parts:<br />
<br />
<br />
<br />
<br />
<br />
<br />
*A coded representation of the replicator<br />
<br />
<br />
<br />
*A mechanism to copy the coded representation<br />
<br />
<br />
<br />
*A mechanism for effecting construction within the host environment of the replicator<br />
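The three parts above can be sketched as a toy model. Everything here (the genome string, the function names, the dictionary representation of a machine) is illustrative only and not taken from von Neumann's actual design:

```python
# Toy model of von Neumann's three-part replicator. All names are
# hypothetical stand-ins for the abstract parts described above.

GENOME = "arm;copier;constructor"           # 1: coded representation of the replicator

def copy_description(genome):
    """2: mechanism that copies the coded representation verbatim."""
    return str(genome)

def construct(genome, environment):
    """3: mechanism effecting construction within the host environment."""
    machine = {"parts": genome.split(";"), "genome": copy_description(genome)}
    environment.append(machine)
    return machine

world = []
parent = construct(GENOME, world)
# The offspring is built from the *copied* genome, not from the parent itself.
child = construct(parent["genome"], world)
```

The essential point the model captures is that the description is copied separately from the machine it describes, which is what lets the child carry a genome of its own.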
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing [https://arstechnica.com/science/2011/04/investigations-into-the-ancient-rna-world/ RNA that can be copied] in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication".<br />
<br />
<br />
<br />
<br />
<br />
<br />
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a [[crystal]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Classes of self-replication===<br />
<br />
<br />
<br />
<br />
Recent research<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.htm | date = 2004 | accessdate = 29 June 2013 | last = Freitas | first = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - General Taxonomy of Replicators}}</ref> has begun to categorize replicators, often based on the amount of support they require.<br />
<br />
<br />
<br />
<br />
<br />
<br />
*Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms.<br />
<br />
<br />
<br />
*[[Autotroph]]ic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products.<br />
<br />
<br />
<br />
*Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire.<br />
<br />
<br />
<br />
*Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale.<br />
<br />
<br />
<br />
<br />
<br />
<br />
The design space for machine replicators is very broad. A comprehensive study<ref>{{cite web|url = http://www.MolecularAssembler.com/KSRM/5.1.9.htm | date = 2004 | accessdate = 29 June 2013 | last1 = Freitas | first1 = Robert | last2 = Merkle | first2 = Ralph | title = Kinematic Self-Replicating Machines - Freitas-Merkle Map of the Kinematic Replicator Design Space (2003–2004)}}</ref> to date by [[Robert Freitas]] and [[Ralph Merkle]] has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.<br />
<br />
<br />
<br />
<br />
<br />
<br />
===A self-replicating computer program===<br />
<br />
<br />
<br />
<br />
{{Main|Quine (computing)}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[computer science]] a [[Quine (computing)|quine]] is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the [[Python (programming language)|Python programming language]] is:<br />
<br />
<br />
<br />
<br />
<br />
<br />
:<code>a='a=%r;print(a%%a)';print(a%a)</code><br />
<br />
<br />
<br />
<br />
<br />
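The quine property can be checked directly by executing the program and comparing its output with its source. A minimal sketch using only Python's standard library:

```python
import io
import contextlib

# The quine from the text: executing it prints its own source code.
quine = "a='a=%r;print(a%%a)';print(a%a)"

# Run the program while capturing everything it writes to stdout.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)

# A program is a quine precisely when its output equals its own source.
printed = buf.getvalue().strip()
```

The `%r` conversion is what makes this work: it inserts the repr of the string, quotes included, so the printed text reconstructs the assignment that produced it.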
<br />
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.<br />
<br />
<br />
<br />
<br />
<br />
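A minimal sketch of such a copier (file names and contents below are hypothetical): directed at any file it is an ordinary copy program, and directed at its own source file it reproduces itself without containing a description of itself.

```python
import io
import os
import tempfile

def copy_stream(path, out):
    """Copy whatever data stream the program is directed at.
    Directing it at its own source (e.g. __file__) makes it self-reproducing,
    because the source is then treated as ordinary data."""
    with open(path) as f:
        out.write(f.read())

# Demonstration with an ordinary file standing in for the program's source.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('hello')\n")
    source_path = f.name

buf = io.StringIO()
copy_stream(source_path, buf)
os.unlink(source_path)
```

Note the contrast with the quine above: the quine encodes its own text, while the copier relies on the environment (the file system) to hold the description for it.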
<br />
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating tiling===<br />
<br />
<br />
{{See also|Self-similarity}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In [[geometry]] a self-replicating tiling is a tiling pattern in which several [[congruence (geometry)|congruent]] tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as [[tessellation]]. The "sphinx" [[hexiamond]] is the only known self-replicating [[pentagon]].<ref>For an image that does not show how this replicates, see: Eric W. Weisstein. "Sphinx." From MathWorld--A Wolfram Web Resource. [http://mathworld.wolfram.com/Sphinx.html http://mathworld.wolfram.com/Sphinx.html]</ref> For example, four such [[concave polygon|concave]] pentagons can be joined together to make one with twice the dimensions.<ref>For further illustrations, see [http://www.geoaustralia.com/italian/Sphinx/Guide.html Teaching TILINGS / TESSELLATIONS with Geo Sphinx]</ref> [[Solomon W. Golomb]] coined the term [[rep-tiles]] for self-replicating tilings.<br />
<br />
<br />
<br />
<br />
<br />
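The scaling in this construction follows from areas alone: if ''n'' congruent copies tile an enlarged similar copy, the linear scale factor must be √''n'', so the rep-4 sphinx is doubled in each dimension. A quick numerical check of that relation:

```python
import math

def scale_factor(n):
    """Linear magnification of a rep-n tile assembled from n unit copies.
    Areas add, so the enlarged tile has area n and linear scale sqrt(n)."""
    return math.sqrt(n)

# Four sphinx hexiamonds form a sphinx of twice the dimensions (rep-4).
doubled = scale_factor(4)
area_ratio = doubled ** 2
```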
<br />
In 2012, [[Lee Sallows]] identified rep-tiles as a special instance of a [[self-tiling tile set]] or setiset. A setiset of order ''n'' is a set of ''n'' shapes that can be assembled in ''n'' different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-''n'' rep-tile is just a setiset composed of ''n'' identical pieces.<br />
<br />
<br />
{|<br />
|- style="vertical-align:bottom;"<br />
| [[File:Self-replication of sphynx hexidiamonds.svg|thumb|left|text-bottom|260px|Four '[[Sphinx tiling|sphinx]]' hexiamonds can be put together to form another sphinx.]]<br />
| [[File:A rep-tile-based setiset of order 4.png|thumb|right|text-bottom|290px|A perfect [[Self-tiling tile set|setiset]] of order 4]]<br />
|}<br />
<br />
{{clear}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Self-replicating clay crystals===<br />
<br />
One form of natural self-replication that isn't based on DNA or RNA occurs in clay crystals.<ref>{{cite web|url=http://www.bbc.com/earth/story/20160823-the-idea-that-life-began-as-clay-crystals-is-50-years-old |title=The idea that life began as clay crystals is 50 years old |publisher=bbc.com |date=2016-08-24 |accessdate=2019-11-10}}</ref> Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if e.g. placed in a water solution containing the crystal components; automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Applications===<br />
<br />
<br />
It is a long-term goal of some engineering sciences to achieve a [[clanking replicator]], a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of [[labour (economics)|labor]], [[Capital (economics)|capital]] and [[distribution (business)|distribution]] in conventional [[factory|manufactured goods]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
A fully novel artificial replicator is a reasonable near-term goal.<br />
<br />
<br />
A [[NASA]] study recently placed the complexity of a [[clanking replicator]] at approximately that of [[Intel]]'s [[Pentium (brand)|Pentium]] 4 CPU.<ref>{{cite web|url=http://www.niac.usra.edu/files/studies/final_report/883Toth-Fejel.pdf |title=Modeling Kinematic Cellular Automata Final Report |publisher= |date=April 30, 2004 |accessdate=2013-10-22}}</ref> That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.<br />
<br />
<br />
<br />
<br />
<br />
<br />
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A variation of self replication is of practical relevance in [[compiler]] construction, where a similar [[bootstrapping]] problem occurs as in natural self replication. A compiler ([[phenotype]]) can be applied on the compiler's own [[source code]] ([[genotype]]) producing the compiler itself. During compiler development, a modified ([[Mutation|mutated]]) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.<br />
<br />
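<br />
The self-application described above is closely related to quines: programs whose output is their own source code. A minimal Python example (illustrative only):<br />

```python
# A quine: running this two-line program prints its own source code,
# the software analogue of a replicator copying its own "genotype".
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Applied to its own source, a bootstrapped compiler reproduces itself in much the same way; the difference is that an engineer, not the program, supplies the mutations.<br />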
<br />
<br />
<br />
<br />
<br />
==Mechanical self-replication==<br />
<br />
<br />
{{Main|Self-replicating machine}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
An active area of research in robotics is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:<br />
<br />
<br />
<br />
<br />
<br />
<br />
*Obtain construction materials<br />
<br />
<br />
<br />
*Manufacture new parts including its smallest parts and thinking apparatus<br />
<br />
<br />
<br />
*Provide a consistent power source<br />
<br />
<br />
<br />
*Program the new members<br />
<br />
<br />
<br />
*Error-correct any mistakes in the offspring<br />
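<br />
The steps in the list above can be sketched as a replication loop. Every name below is a stand-in invented for illustration; none of it corresponds to a real robot API:<br />

```python
# Hypothetical sketch of the replication loop implied by the list above.

class Robot:
    def __init__(self, program):
        self.program = program        # the "thinking apparatus" software
        self.parts = []

    def gather_materials(self):
        return ["frame", "motor", "controller"]   # obtain construction materials

    def manufacture(self, materials):
        return [m.upper() for m in materials]     # stand-in for fabrication

    def replicate(self):
        parts = self.manufacture(self.gather_materials())
        child = Robot(program=self.program)       # program the new member
        child.parts = parts
        assert child.program == self.program      # error correction: verify the copy
        return child
```

Calling `Robot("explore").replicate()` yields a child carrying the same program, which is the essential closure property a self-replicating machine must achieve.<br />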
<br />
<br />
<br />
<br />
<br />
<br />
<br />
On a [[Nanotechnology|nano]] scale, [[Assembler (nanotechnology)|assemblers]] might also be designed to self-replicate under their own power. This, in turn, has given rise to the "[[grey goo]]" version of [[Armageddon]], as featured in such science fiction novels as ''[[Bloom (novel)|Bloom]]'', ''[[Prey (novel)|Prey]]'', and ''[[Recursion (novel)|Recursion]]''.<br />
<br />
<br />
<br />
<br />
<br />
<br />
The [[Foresight Institute]] has published guidelines for researchers in mechanical self-replication.<ref>{{cite web|url=http://foresight.org/guidelines/ |title=Molecular Nanotechnology Guidelines |publisher=Foresight.org |date= |accessdate=2013-10-22}}</ref> The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a [[broadcast architecture]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
For a detailed article on mechanical reproduction as it relates to the industrial age see [[mass production]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Fields==<br />
<br />
<br />
{{refimprove section|date=August 2017}}<br />
<br />
<br />
<br />
Research has occurred in the following areas:<br />
<br />
<br />
<br />
<br />
<br />
<br />
* [[Biology]] studies natural replication and replicators, and their interaction. These can be an important guide to avoid design difficulties in self-replicating machinery.<br />
<br />
<br />
<br />
* In [[Chemistry]] self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set <ref>{{cite book |author=Moulin, Giuseppone |title=Constitutional Dynamic Chemistry |volume=322 |pages=87–105 |year=2011|publisher=Springer|doi=10.1007/128_2011_198|pmid=21728135 |series=Topics in Current Chemistry |isbn=978-3-642-28343-7 |chapter=Dynamic Combinatorial Self-Replicating Systems }}</ref> (often part of [[Systems chemistry]] field).<br />
<br />
<br />
<br />
* [[Meme]]tics studies ideas and how they propagate in human culture. Memes require only small amounts of material, and therefore have theoretical similarities to [[virus]]es and are often described as [[virus|viral]].<br />
<br />
<br />
<br />
* [[Nanotechnology]] or more precisely, [[molecular nanotechnology]] is concerned with making [[Nanotechnology|nano]] scale [[assembler (nanotechnology)|assemblers]]. Without self-replication, capital and assembly costs of molecular machines become impossibly large.<br />
<br />
<br />
<br />
* Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.<br />
<br />
<br />
<br />
* [[Computer security]]: Many computer security problems are caused by self-reproducing computer programs that infect computers — [[computer worm]]s and [[computer virus]]es.<br />
<br />
<br />
<br />
* In [[parallel computing]], it takes a long time to manually load a new program on every node of a large [[computer cluster]] or [[distributed computing]] system. Automatically loading new programs using [[mobile agent]]s can save the system administrator a lot of time and give users their results much quicker, as long as they don't get out of control.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==In industry==<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Space exploration and manufacturing===<br />
<br />
<br />
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an [[autotroph]]ic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. [[Von Neumann Probe|Another model]] of self-replicating machine would copy itself through the galaxy and universe, sending information back.<br />
<br />
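<br />
The appeal of self-replication for a low launch mass is exponential growth: a seed factory that copies itself each cycle soon dwarfs what was launched. A toy calculation (the numbers are invented for illustration, not taken from any study):<br />

```python
# Exponential growth of a self-replicating seed factory.
launch_mass_kg = 10_000       # hypothetical mass of one seed factory
doubling_periods = 20         # hypothetical number of replication cycles

total_kg = launch_mass_kg * 2**doubling_periods
print(total_kg)  # 10485760000 kg of machinery from a single launch
```

This compounding is why even a slow replication cycle can, in principle, outproduce any directly launched infrastructure.<br />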
<br />
<br />
<br />
<br />
<br />
In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A classic theoretical study of replicators in space is the 1980 [[NASA]] study of autotrophic clanking replicators, edited by [[Robert Freitas]].<ref>[[Wikisource:Advanced Automation for Space Missions]]</ref><br />
<br />
<br />
<br />
<br />
<br />
<br />
Much of the design study was concerned with a simple, flexible chemical system for processing lunar [[regolith]], and with the differences between the ratio of elements needed by the replicator and the ratios available in regolith. The limiting element was [[chlorine]], which is essential for processing regolith into [[aluminium]]. Chlorine is very rare in lunar regolith, so a substantially faster rate of reproduction could be assured by importing modest amounts.<br />
<br />
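<br />
The limiting-element argument can be illustrated with a small calculation. All of the mass fractions below are invented for illustration; they are not figures from the NASA study:<br />

```python
# For each element, available/required says how many replicator masses the
# regolith can supply; the element with the smallest ratio limits growth.
required  = {"Si": 0.20, "Al": 0.10, "Fe": 0.05, "Cl": 0.010}   # fraction a replicator needs
available = {"Si": 0.21, "Al": 0.13, "Fe": 0.09, "Cl": 0.0001}  # fraction found in regolith

limiting = min(required, key=lambda e: available[e] / required[e])
print(limiting)  # Cl
```

Importing the scarce element from Earth raises its effective availability, removing the bottleneck, which is the logic behind shipping modest amounts of chlorine.<br />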
<br />
<br />
<br />
<br />
<br />
The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic [[robot]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
Power would be provided by a "canopy" of [[solar cell]]s supported on pillars. The other machinery could run under the canopy.<br />
<br />
<br />
<br />
<br />
<br />
<br />
A "[[casting]] [[robot]]" would use a robotic arm with a few sculpting tools to make [[plaster]] [[molding (process)|mold]]s. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock ([[basalt]]) or purified metals. An [[electricity|electric]] [[oven]] melted the materials.<br />
<br />
A "casting robot" would use a robotic arm with a few sculpting tools to make plaster molds. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock (basalt) or purified metals. An electric oven melted the materials.<br />
<br />
一个“铸造机器人”将使用一个机械手臂和一些雕刻工具来制作石膏模具。石膏模具易于制作,而且制作精确的零件表面光洁度好。然后,机器人将用非导电熔岩(玄武岩)或纯净金属铸造大部分零件。电炉熔化了这些材料。<br />
<br />
<br />
<br />
<br />
<br />
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".<br />
<br />
<br />
<br />
<br />
<br />
<br />
===Molecular manufacturing===<br />
<br />
<br />
{{Main|Molecular nanotechnology#Replicating nanorobots}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Nanotechnology|Nanotechnologists]] in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating [[assembler (nanotechnology)|assembler]] of [[nanometer]] dimensions [http://www.MolecularAssembler.com/KSRM/4.11.3.htm].<br />
<br />
<br />
<br />
<br />
<br />
<br />
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether [[molecular manufacturing]] is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a [[Lego]]-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [http://www.MolecularAssembler.com/KSRM/3.23.4.htm].<br />
<br />
<br />
<br />
<br />
<br />
<br />
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of [[protein biosynthesis]] (also see the listing for [[RNA]]).<br />
<br />
<br />
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.<br />
<br />
<br />
<br />
<br />
<br />
<br />
In 2011, New York University scientists developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.<ref>{{cite journal | doi = 10.1038/nature10500 | last1 = Wang | first1 = Tong | last2 = Sha | first2 = Ruojie | last3 = Dreyfus | first3 = Rémi | last4 = Leunissen | first4 = Mirjam E. | last5 = Maass | first5 = Corinna | last6 = Pine | first6 = David J. | last7 = Chaikin | first7 = Paul M. | last8 = Seeman | first8 = Nadrian C. | year = 2011 | title = Self-replication of information-bearing nanoscale patterns | journal = Nature | volume = 478 | issue = 7368 | pages = 225–228 | pmid=21993758 | pmc=3192504}}</ref><ref>{{cite web | url = https://www.sciencedaily.com/releases/2011/10/111012132651.htm | title = Self-replication process holds promise for production of new materials. | date = 17 October 2011 | website = Science Daily | accessdate=17 October 2011}}</ref><br />
<br />
<br />
<br />
<br />
<br />
<br />
For a discussion of other chemical bases for hypothetical self-replicating systems, see [[alternative biochemistry]].<br />
<br />
<br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
<br />
* [[Artificial life]]<br />
<br />
<br />
<br />
* [[Astrochicken]]<br />
<br />
<br />
<br />
* [[Autopoiesis]]<br />
<br />
<br />
<br />
* [[Complex system]]<br />
<br />
<br />
<br />
* [[DNA replication]]<br />
<br />
<br />
<br />
* [[Life]]<br />
<br />
<br />
<br />
* [[Robot]]<br />
<br />
<br />
<br />
* [[RepRap]] (self-replicated 3D printer)<br />
<br />
<br />
<br />
* [[Self-replicating machine]]<br />
<br />
<br />
<br />
** [[Self-replicating spacecraft]]<br />
<br />
<br />
<br />
* [[Space manufacturing]]<br />
<br />
<br />
<br />
* [[Von Neumann universal constructor]]<br />
<br />
<br />
<br />
* [[Virus]]<br />
<br />
<br />
<br />
* [[Von Neumann machine (disambiguation)]]<br />
<br />
<br />
<br />
* [[Self reconfigurable]]<br />
<br />
<br />
<br />
* [[Final Anthropic Principle]]<br />
<br />
<br />
<br />
* [[Positive feedback]]<br />
<br />
<br />
<br />
* [[Harmonic]]<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==References==<br />
<br />
<br />
{{reflist}}<br />
<br />
<br />
<br />
;Notes<br />
<br />
<br />
{{refbegin}}<br />
<br />
<br />
<br />
* von Neumann, J., 1966, ''The Theory of Self-reproducing Automata'', A. Burks, ed., Univ. of Illinois Press, Urbana, IL.<br />
<br />
<br />
<br />
* [[s:Advanced Automation for Space Missions|Advanced Automation for Space Missions]], a 1980 NASA study edited by [[Robert Freitas]]<br />
<br />
<br />
<br />
* [http://www.MolecularAssembler.com/KSRM.htm Kinematic Self-Replicating Machines] first comprehensive survey of entire field in 2004 by [[Robert Freitas]] and [[Ralph Merkle]]<br />
<br />
<br />
<br />
* [https://web.archive.org/web/20040920220139/http://www.niac.usra.edu/files/studies/final_report/pdf/883Toth-Fejel.pdf NASA Institute for Advance Concepts study by General Dynamics]- concluded that complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.<br />
<br />
<br />
<br />
* ''[[Gödel, Escher, Bach]]'' by [[Douglas Hofstadter]] (detailed discussion and many examples)<br />
<br />
<br />
<br />
* Kenyon, R., ''Self-replicating tilings'', in: Symbolic Dynamics and Applications (P. Walters, ed.) Contemporary Math. vol. 135 (1992), 239-264.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:Self-replication| ]]<br />
<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Self-replication]]. Its edit history can be viewed at [[自复制/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15045通用人工智能2020-10-13T07:08:49Z<p>粲兰:</p>
<hr />
<div>This entry was translated by 袁一博; the translation has not yet been edited or proofread.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
<br />
{{Harvnb|Russell|Norvig|2003}},<br />
<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
<br />
{{Harvnb|Nilsson|1998}}.<br />
<br />
</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];进行推理,使用策略,解决谜题,并在不确定条件下做出判断。<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];表示知识,包括常识知识。<br />
<br />
* [[automated planning and scheduling|plan]];规划。<br />
<br />
* [[machine learning|learn]];学习。<br />
<br />
* communicate in [[natural language processing|natural language]];使用自然语言进行交流。<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.并综合运用所有这些技能来实现共同目标。<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
其他重要的能力包括在可观察到智能行为的客观世界中进行感知(例如视觉)和行动(例如移动和操纵物体)的能力,其中包括检测和应对危险的能力。许多跨学科的智能研究方法(例如认知科学、计算智能和决策科学)倾向于强调还需考虑额外的特征,例如想象力(指形成未经预先编程的意象和概念的能力)和自主性。<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
展现出其中许多能力的计算机系统确实存在(参见计算创造力、自动推理、决策支持系统、机器人、进化计算、智能体),但尚未达到人类的水平。<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}} 确认人类水平通用人工智能的测试===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
人们考虑过以下用于确认人类水平通用人工智能的测试:<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
一台机器和一个人类在互不见面的情况下与第二个人类对话,后者必须判断两者中哪一个是机器;如果机器能在相当大比例的时间里骗过评估者,就算通过了测试。注意:图灵并没有规定什么才算智能,只是规定一旦知道对方是一台机器,就应将其排除在外。<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
机器人大学生考试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
一台机器注册进入一所大学,修读并通过与人类学生相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
机器从事一项经济上重要的工作,在同一项工作中表现至少和人类一样好。<br />
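The imitation game described above under the Turing Test can be sketched as a short program. This is only an illustrative sketch, not part of the original article: the `evaluator`, `machine` and `human` callables, the fixed prompt, and the 30% `threshold` are hypothetical stand-ins for Turing's informal "significant fraction of the time".<br />

```python
import random

def imitation_game(evaluator, machine, human, trials=100, threshold=0.3):
    """Toy sketch of Turing's imitation game (hypothetical interfaces).

    `machine` and `human` map a question to an answer; `evaluator` sees
    two anonymous answers (labels "A"/"B") and must name the machine.
    The machine passes if it fools the evaluator in a significant
    fraction of trials.
    """
    fooled = 0
    for _ in range(trials):
        question = "What is your favourite memory?"  # placeholder prompt
        # Hide the two players behind randomly assigned labels.
        if random.random() < 0.5:
            answers = {"A": machine(question), "B": human(question)}
            machine_label = "A"
        else:
            answers = {"A": human(question), "B": machine(question)}
            machine_label = "B"
        guess = evaluator(question, answers)  # evaluator names the machine
        if guess != machine_label:
            fooled += 1
    return fooled / trials >= threshold

random.seed(0)  # reproducible toy run
# A blind guesser is fooled roughly half the time, so the machine "passes".
fooled_often = imitation_game(
    evaluator=lambda q, answers: random.choice(["A", "B"]),
    machine=lambda q: "beep",
    human=lambda q: "a summer holiday",
)
```

An evaluator that can reliably spot the machine's answers (here, by recognising "beep") makes the test return `False`, which matches Turing's point that only the evaluator's judgment, not the machine's internals, decides the outcome.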
<br />
<br />
<br />
=== Problems requiring AGI to solve 等待通用人工智能解决的问题===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
对于计算机来说,最困难的问题被非正式地称为“AI完全问题”或“AI困难问题”,这意味着解决这些问题需要相当于人类智能的通用能力(即强人工智能),而非某种专用算法所能胜任。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
据推测,AI完全问题包括通用的计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
仅靠目前的计算机技术无法解决AI完全问题,还需要人工计算。这一特性可以用来测试人类是否在场(CAPTCHA 验证码的目标正是如此),也可以用于计算机安全以抵御暴力破解攻击。<br />
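The server-side half of this idea can be sketched as follows. This is a loose illustration, not a real CAPTCHA library: the `CaptchaGate` class, its token scheme and attempt limit are invented for this sketch, and producing the actual hard-for-machines challenge (e.g. rendering distorted text) is left out. The point shown is that the server keeps only a salted hash of the answer and rate-limits guesses, so a program that cannot solve the underlying AI problem is reduced to a throttled brute-force attack.<br />

```python
import hashlib
import os
import secrets

class CaptchaGate:
    """Minimal sketch of a CAPTCHA check with brute-force throttling."""

    def __init__(self, max_attempts=3):
        self.max_attempts = max_attempts
        self.pending = {}  # token -> [salt, answer_hash, attempts_left]

    def issue(self, answer: str) -> str:
        """Register a challenge; the distorted rendering that a human
        solves is assumed to be produced elsewhere."""
        token = secrets.token_hex(8)
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + answer.encode()).hexdigest()
        self.pending[token] = [salt, digest, self.max_attempts]
        return token

    def verify(self, token: str, response: str) -> bool:
        entry = self.pending.get(token)
        if entry is None:
            return False  # unknown, consumed, or expired token
        salt, digest, attempts_left = entry
        if attempts_left <= 0:
            del self.pending[token]  # too many guesses: challenge is void
            return False
        entry[2] -= 1
        ok = hashlib.sha256(salt + response.encode()).hexdigest() == digest
        if ok:
            del self.pending[token]  # one-time use on success
        return ok

gate = CaptchaGate()
tok = gate.issue("sunset")            # the answer a human reads off the image
assert gate.verify(tok, "xyz") is False
assert gate.verify(tok, "sunset") is True
assert gate.verify(tok, "sunset") is False  # token already consumed
```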
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
然而,在20世纪70年代初,研究人员显然严重低估了该项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。进入20世纪80年代,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间表,其中包括“进行日常对话”这样的通用人工智能目标。为了回应这一点以及专家系统的成功,工业界和政府都重新将资金投入这一领域。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标也从未实现。这是20年来的第二次,预测通用人工智能即将实现的研究人员被证明犯了根本性的错误。到了20世纪90年代,人工智能研究人员已因做出虚假承诺而声名狼藉。他们变得根本不愿做预测,也避免提及“人类水平”的人工智能,因为害怕被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
在20世纪90年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的特定子问题(例如人工神经网络和统计机器学习),获得了远为可观的商业成功和学术声望。这些“应用人工智能”系统如今在整个科技行业得到广泛应用,这方面的研究也得到了学术界和产业界的大量资助。目前,这一领域的发展被认为是一种新兴趋势,预计要在10多年后才会进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过组合解决各种子问题的程序来开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条通往人工智能的自下而上的路线,终有一天会与传统的自上而下的路线在中途相会,从而提供在推理程序中一直令人沮丧地难以获得的现实世界能力和常识知识。当象征性的金道钉被敲下、将这两方面的努力连为一体时,完全智能的机器就会由此诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
然而,即使这一基本理念也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的‘自上而下’(符号)方法终将在中间某处与‘自下而上’(感官)方法相会。如果本文中关于符号基础的考虑是正确的,那么这种期望就是无可救药地模块化的,从感知到符号实际上只有一条可行的路径:自下而上。像计算机软件层那样自由浮动的符号层,永远无法通过这条路径到达(反之亦然)——也不清楚我们为什么要试图到达这样一个层面,因为那看起来无异于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
“通用人工智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用。这个术语在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。其研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)始于1984年的 Cyc 项目,以及艾伦·纽厄尔(Allen Newell)的 Soar 项目,都被认为属于通用人工智能的范畴。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届通用人工智能暑期学校。第一批大学课程由托多尔·阿瑙多夫(Todor Arnaudov)于2010年和2011年在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,在短期内无法完全复制。不过,仍有少数计算机科学家活跃于通用人工智能研究,其中许多人正在为一系列通用人工智能会议做出贡献。这些研究极其多样,而且往往具有开创性。格兹尔在他的书的序言中说,对于制造一个真正灵活的通用人工智能所需时间的估计从10年到一个多世纪不等,但通用人工智能研究界似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在'''<font color="#ff8000">《奇点临近》(The Singularity is Near)</font>'''中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AIs such as Google AI and Apple's Siri. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests carried out in 2014 had found a maximum IQ value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
<br />
<br />
==Processing power needed to simulate a brain==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap>{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
<br />
<br />
===Early estimates===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, <{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}> Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
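The arithmetic behind these estimates is easy to reproduce. The sketch below uses only figures quoted above (10<sup>11</sup> neurons, ~7,000 synapses per neuron, Kurzweil's 10<sup>16</sup> cps) plus the 1.1-year doubling period from the figure caption; the projection helper is illustrative, not a forecast.

```python
from math import log2

# Figures quoted in the paragraph above.
NEURONS = 1e11               # ~100 billion neurons
SYNAPSES_PER_NEURON = 7_000  # average connections per neuron
KURZWEIL_CPS = 1e16          # Kurzweil's adopted "computations per second"
DOUBLING_YEARS = 1.1         # trendline assumption from the figure caption

def total_synapses():
    """Synapse count implied by the per-neuron average (~7e14, slightly
    above the 1e14 to 5e14 range quoted for adults)."""
    return NEURONS * SYNAPSES_PER_NEURON

def years_until(target_cps, current_cps):
    """Years of exponential growth needed to close a compute gap,
    assuming capacity doubles every DOUBLING_YEARS years."""
    return DOUBLING_YEARS * log2(target_cps / current_cps)

# Starting from a 1e9-cps desktop processor (mentioned later in this
# article), reaching Kurzweil's figure takes roughly 25 years of doublings.
gap_years = years_until(KURZWEIL_CPS, 1e9)
```

Note that a 10-petaFLOPS supercomputer (10<sup>16</sup> FLOPS, reached in 2011) already matches the 10<sup>16</sup> figure if one "computation" is equated with one floating-point operation, which is the comparison the paragraph draws.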
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
===Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
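The gap between such simulations and real time is easy to quantify from the numbers above; the doubling-period figure reused below is the 1.1-year trendline assumption from the chart earlier in this section, so the projection is purely illustrative.

```python
from math import ceil, log2

SECONDS_PER_DAY = 86_400

def slowdown_factor(wall_days=50, simulated_seconds=1):
    """How many times slower than real time the 2005 run was."""
    return wall_days * SECONDS_PER_DAY // simulated_seconds

def doublings_to_real_time():
    """Compute doublings needed to close that gap, all else being equal."""
    return ceil(log2(slowdown_factor()))

# 50 wall-clock days per simulated second is a ~4.3-million-fold slowdown;
# closing it purely by hardware doublings every 1.1 years would take
# roughly 23 doublings, i.e. about 25 years.
```

This back-of-envelope figure ignores algorithmic improvements and parallel scaling, both of which could shorten the timeline considerably.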
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzel's AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
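The two neuron-count estimates above imply the following arithmetic; the bytes-per-synapse figure is an assumption chosen purely for illustration, since a real emulation would need far more state per synapse.

```python
# Neuron counts quoted in the paragraph above (Azevedo et al. estimate).
CORTEX_NEURONS = 16.3e9
CEREBELLUM_NEURONS = 69e9
TOTAL_NEURONS = 86e9

def neurons_elsewhere():
    """Neurons outside cortex and cerebellum implied by the 86-billion total."""
    return TOTAL_NEURONS - CORTEX_NEURONS - CEREBELLUM_NEURONS  # ~0.7e9

def synapse_map_bytes(synapses=1e14, bytes_per_synapse=4):
    """Naive storage floor for a synapse map: one 4-byte weight per synapse
    (an illustrative assumption, not a published figure)."""
    return synapses * bytes_per_synapse  # 4e14 bytes = 400 terabytes
```

Even under this deliberately minimal assumption, a synapse-level map of the lower-bound estimate (10<sup>14</sup> synapses) occupies hundreds of terabytes, before any glial contribution is counted.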
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to Russell and Norvig, "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."<br />
<br />
弱人工智能假说等同于“通用人工智能是可能实现的”这一假说。根据罗素和诺维格的说法,“大多数人工智能研究人员认为弱人工智能假说是理所当然的,并且不关心强人工智能假说。”<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
In contrast to Searle, Ray Kurzweil uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind, regardless of whether a philosopher would be able to determine if it actually has a mind or not.<br />
<br />
与塞尔不同,雷·库兹韦尔(Ray Kurzweil)用“强人工智能”一词来描述任何行为表现得像拥有心智的人工智能系统,而不管哲学家能否确定它是否真的拥有心智。<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
In science fiction, AGI is associated with traits such as consciousness, sentience, sapience, and self-awareness observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "strong AI hypothesis." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a mind and consciousness. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
在科幻小说中,通用人工智能与生物所具有的意识、知觉、智慧和自我意识等特征联系在一起。然而,根据塞尔的说法,通用智能是否足以产生意识还是一个悬而未决的问题。“强人工智能”(如上文库兹韦尔所定义的)不应与塞尔的“强人工智能假说”相混淆。强人工智能假说认为,一台行为表现得像人一样智能的计算机必然也拥有心智和意识。而通用人工智能只是指机器所显示出的智能程度,无论其是否拥有心智。<br />
<br />
<br />
<br />
===Consciousness 意识===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence:<br />
<br />
除智能之外,人类心智还有其他一些方面与强人工智能的概念相关,它们在科幻小说和人工智能伦理中扮演着重要角色:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
意识:拥有主观体验和思想。值得一提的是,意识是很难定义的。托马斯·内格尔(Thomas Nagel)给出的一个著名定义是:一个事物如果能体会到某种感觉,那么它就是有意识的。如果我们没有意识,那么我们不会有任何感觉。内格尔以蝙蝠为例:我们可以合理地问“成为一只蝙蝠的感觉如何?”,但我们不大可能问“成为一个吐司机的感觉如何?”。内格尔由此总结:蝙蝠看起来是有意识的(即拥有意识),而吐司机则不是。<br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
自我意识:能够意识到自己是一个独立的个体,尤其是意识到自己的思想。<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
知觉:主观地感受概念或者情感的能力。<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
智慧:获得智慧的能力。<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the rights of non-human animals. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<br />
<br />
这些特征具有道德维度,因为拥有这种强人工智能形式的机器可能拥有法律权利,类似于非人类动物的权利。因此,已经开展了初步工作,探讨如何将全面的道德行为者纳入现有的法律和社会框架。这些方法都集中在强大的人工智能的法律地位和权利上。<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
However, Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity. It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
然而,比尔·乔伊(Bill Joy)等人认为,具有这些特征的机器可能会威胁到人类的生命或尊严。这些特征对于强人工智能来说是否是必要的还有待证明。意识的作用并不清楚,目前也没有针对其存在而进行的一致的测试。如果一台机器装有一个模拟与意识相关的神经的装置,它会自动具有自我意识吗?也有可能这些特性中的一些,比如感知能力,自然而然地从一个完全智能的机器中产生,或者一旦机器开始以一种明显智能的方式行动,人们就会自然而然地认为这些特性是机器自主产生的。<br />
--[[用户:粲兰|袁一博]]([[用户讨论:粲兰|讨论]])“或者一旦机器开始以一种明显智能的方式行动,人们就会自然而然地认为这些特性是机器自主产生的。”对应原句“or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent.”与原句在语序和措辞上略有不同,是译者考虑到中文的阅读习惯在不改变原意的条件下意译得出的。<br />
例如,智能行为可能足以判定机器产生了知觉,而非反过来。<br />
<br />
<br />
<br />
===Artificial consciousness research 人工意识研究===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.<br />
<br />
虽然意识在强人工智能/通用人工智能中的作用尚有争议,但许多通用人工智能研究人员认为,研究实现意识的可能性是至关重要的。在一项早期工作中,伊戈尔·亚历山大(Igor Aleksander)认为,创造有意识机器的原理已经存在,但训练这样一台机器理解语言还需要四十年。<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research 人工智能研究进展缓慢的可能解释==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level. A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power. In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.<br />
<br />
自1956年人工智能研究启动以来,这一领域的发展随着时间的推移而放缓,使得创造具有人类水平智能行为的机器这一目标陷入停滞。对这种延迟的一个可能解释是,计算机缺乏足够的内存或处理能力。此外,人工智能研究过程本身的复杂程度也可能限制其进展。<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like Hubert Dreyfus and Roger Penrose who deny the possibility of achieving strong AI. John McCarthy was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.<br />
<br />
虽然大多数人工智能研究人员相信强人工智能可以在未来实现,但也有一些人,如休伯特·德雷福斯(Hubert Dreyfus)和罗杰·彭罗斯(Roger Penrose),否认实现强人工智能的可能性。约翰·麦卡锡(John McCarthy)是众多相信人类水平人工智能终将实现的计算机科学家之一,但其实现日期无法准确预测。<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究人员可能需要修改其学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如威廉·克罗克森(William Clocksin)在2003年所写:“这个框架始于魏岑鲍姆(Weizenbaum)的观察,即智能只有相对于特定的社会和文化背景才能表现出来。”<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking (Moravec's paradox). A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent. However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.<br />
<br />
此外,人工智能研究人员已经能够创造出可以完成对人类而言很复杂的工作(如数学)的计算机,但相反,他们却难以开发出能够执行对人类而言很简单的任务(如行走)的计算机,这就是莫拉维克悖论(Moravec's paradox)。大卫·格勒尼特(David Gelernter)描述的一个问题是,有些人认为思考和推理是等价的。然而,思想与思想的创造者是否彼此独立这一问题引起了人工智能研究者的兴趣。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI. Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员做出却未能兑现的预测,以及对人类行为缺乏完整的理解,削弱了人们对人类水平人工智能这一最初设想的信心。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究者仍对在21世纪实现人工智能的目标持乐观态度。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware. Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.<br />
<br />
对于强人工智能研究为何耗时漫长,人们还提出了其他可能的原因。科学问题的错综复杂,以及需要通过心理学和神经生理学充分了解人脑,限制了许多研究人员在计算机硬件中模拟人脑功能的工作。许多研究人员倾向于低估对人工智能未来预测的种种质疑,但如果不认真对待这些问题,人们就可能忽视这些疑难问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment. When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.<br />
<br />
克罗克森说,可能阻碍人工智能研究进展的一个概念上的局限是,人们可能在计算机程序和设备实现上使用了错误的技术。当人工智能研究人员最初瞄准人工智能的目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并找出如何设计执行特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.<br />
<br />
抽象的实践(人们在研究中针对特定语境时往往会重新定义它)使研究人员得以只关注少数几个概念。抽象在人工智能研究中最富成效的应用来自规划和问题求解。虽然其目标是提高计算速度,但抽象的作用也对抽象算子的参与方式提出了问题。<br />
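The productive use of abstraction in planning mentioned above can be made concrete with a small sketch. The maze, grid layout, and function names below are invented for illustration and do not come from the cited research: the exact cost of a relaxed problem that ignores the walls (the Manhattan distance) never overestimates the true cost, so it serves as an admissible heuristic that guides A* search without sacrificing optimality.<br />

```python
import heapq

# Invented toy maze: 'S' start, 'G' goal, '#' walls.
GRID = [
    "S..#....",
    ".#.#.##.",
    ".#...#..",
    ".####.#.",
    "......#G",
]

def find(ch):
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

START, GOAL = find("S"), find("G")

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def astar(h):
    """A* search; returns (optimal path length, number of states expanded)."""
    frontier = [(h(START), 0, START)]
    best = {START: 0}
    expanded = 0
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        if g > best[pos]:
            continue  # stale queue entry
        expanded += 1
        if pos == GOAL:
            return g, expanded
        for nxt in neighbors(pos):
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None, expanded

# Abstraction: drop the wall constraints; the relaxed problem's exact cost
# is the Manhattan distance, an admissible heuristic for the real maze.
abstract_h = lambda p: abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])
blind_h = lambda p: 0  # no abstraction: plain uniform-cost search

cost_a, exp_a = astar(abstract_h)
cost_b, exp_b = astar(blind_h)
assert cost_a == cost_b  # abstraction preserves plan optimality
print(cost_a, exp_a, exp_b)
```

Here the abstraction plays exactly the role described in the text: it buys computation speed (fewer or equal state expansions), while the assertion checks that the abstract guidance still yields an optimal plan.<br />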
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance. The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
人工智能进展缓慢的一个可能原因是,许多人工智能研究人员承认,启发式方法是计算机性能与人类表现之间存在重大差距的一个领域。被编入计算机的特定功能或许能够满足许多使其与人类智能相匹配的要求。这些解释不一定是强人工智能实现延迟的根本原因,但它们得到了众多研究人员的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
There have been many AI researchers that debate over the idea whether machines should be created with emotions. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own. Emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<br />
<br />
许多人工智能研究人员一直在争论是否应该创造带有情感的机器。典型的人工智能模型中没有情感,一些研究人员表示,将情感编程到机器中可以让它们拥有自己的心智。情感概括了人类的经历,因为它使人们能够记住那些经历。大卫·格勒尼特(David Gelernter)写道:“除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情感的关注给人工智能研究人员提出了难题,并且随着研究走向未来,它与强人工智能的概念密切相关。<br />
<br />
<br />
<br />
==Controversies and dangers 争议和风险==<br />
<br />
<br />
<br />
===Feasibility 可能性===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
As of March 2020, AGI remains speculative as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<br />
<br />
截至2020年3月,通用人工智能仍处于推测阶段,因为迄今尚未有人展示出这样的系统。对于通用人工智能是否会到来以及何时到来,人们的看法各不相同。极端的一例是,人工智能先驱赫伯特·西蒙(Herbert A. Simon)在1965年写道:“机器将在20年内具备完成人类能做的任何工作的能力。”然而,这一预言并没有实现。微软(Microsoft)联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”,以及“对认知在科学上的深刻理解”。机器人专家阿兰·温菲尔德(Alan Winfield)在《卫报》上撰文称,现代计算与人类水平人工智能之间的鸿沟,就像当前的太空飞行与实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further current AGI progress considerations can be found below Tests for confirming human-level AGI and IQ-tests AGI.<br />
<br />
人工智能专家对通用人工智能可行性的看法时起时落,并可能在2010年代重新升温。2012年和2013年进行的四次调查显示,专家们对“有50%的把握认为通用人工智能将会到来”的时间的中位数估计为2040年至2050年(因调查而异),而平均值为2081年。在这些专家中,16.5%的人在被问及同样的问题、但把握提高到90%时,回答是“永远不会”。关于当前通用人工智能进展的进一步讨论,可见下文“确认人类水平通用人工智能的测试”和“通用人工智能智商测试”。<br />
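Since the poll figures above hinge on the gap between a median (2040 to 2050) and a mean (2081), a short sketch may help. The response list below is hypothetical, invented purely for illustration (it is not the actual survey data); it shows how a handful of far-future answers drag the mean well past the median, and "never" responses, which have no numeric year at all, would widen the gap further.<br />

```python
import statistics

# Hypothetical AGI-arrival predictions (years), NOT the actual survey data.
# A few far-future answers skew the mean past the median, the same pattern
# reported in the polls above.
predictions = [2035, 2040, 2045, 2045, 2050, 2060, 2100, 2200, 2300]

median = statistics.median(predictions)  # middle value, robust to the tail
mean = statistics.mean(predictions)      # pulled upward by the outliers

print(median)  # 2050
print(mean)
```

The median reports what the "typical" expert said, while the mean is dominated by the long tail of pessimistic answers; this is why the two summary statistics in the surveys can differ by decades.<br />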
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}} 对人类的潜在威胁===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial: 'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on?" Probably not – but this is more or less what is happening with AI.'<br />
<br />
“人工智能带来了生存风险,而且这一风险需要得到远比目前更多的关注”这一论点已得到许多公众人物的支持,其中最著名的也许是埃隆·马斯克(Elon Musk)、比尔·盖茨(Bill Gates)和斯蒂芬·霍金(Stephen Hawking)。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素(Stuart J. Russell)。该论点的支持者有时会对怀疑论者表示困惑:盖茨表示他不“理解为什么有些人不担心”,霍金则在2014年的社论中批评了普遍的冷漠:“所以,面对可能带来无法估量的收益和风险的未来,专家们肯定会尽一切努力确保最好的结果,对吧?错了。如果一个更先进的外星文明给我们发来信息说‘我们几十年后就到’,我们会只是回答‘好的,到了给我们打电话,我们会把灯留着’吗?大概不会,但人工智能领域正在发生的事情或多或少就是这样。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<br />
<br />
许多关注生存风险的学者认为,最好的前进方向是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个问题:程序员可以实现哪些类型的保障措施、算法或架构,以最大程度地提高其递归自我改进的人工智能在达到超级智能之后,继续以友好而非破坏性的方式行事的可能性?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
<br />
==See also 请参阅==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]] 自动机器学习<br />
<br />
* [[Machine ethics]] 机器伦理<br />
<br />
* [[Multi-task learning]] 多任务学习<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]] 超级智能<br />
<br />
* [[Nick Bostrom]] 尼克·博斯特罗姆<br />
<br />
* [[Eliezer Yudkowsky]] 埃利泽·尤德科夫斯基<br />
<br />
* [[Future of Humanity Institute]] 人类未来研究所<br />
<br />
* [[Outline of artificial intelligence]] 人工智能概要<br />
<br />
* [[Artificial brain]] 人工大脑<br />
<br />
* [[Transfer learning]] 迁移学习<br />
<br />
* [[Outline of transhumanism]] 超人类主义概要<br />
<br />
* [[General game playing]] 一般博弈<br />
<br />
* [[Synthetic intelligence]] 合成智能<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI" 智能放大,利用信息技术加强人类智慧而不是建造外在的通用人工智能<br />
{{div col end}}<br />
<br />
<br />
==Notes 附注==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References 参考文献==<br />
<br />
{{refbegin|2}}<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links 拓展链接==<br />
<br />
* [https://www.zhihu.com/question/50049187/answer/1361795900 How is strong AI developing at present, and is there hope of achieving it?] (in Chinese)<br />
<br />
* [https://zhuanlan.zhihu.com/p/59966491 From the author of the "AI winter" argument: artificial general intelligence is still a daydream] (in Chinese)<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15044通用人工智能2020-10-13T06:49:27Z<p>粲兰:</p>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi (彩云小译) and has not yet been manually edited or proofread; apologies for any inconvenience to readers.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements 判定要求==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
{{Harvnb|Russell|Norvig|2003}},<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
{{Harvnb|Nilsson|1998}}.<br />
</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
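The pass criterion these tests share can be made concrete for the Turing Test: the machine "passes" if it fools the evaluator in a sufficiently large fraction of conversations. The sketch below is illustrative only; the function name and the 30% threshold (echoing the deception rate Turing himself predicted for five-minute conversations) are assumptions, not part of any standardized benchmark.

```python
def passes_turing_test(fooled_evaluator, threshold=0.30):
    """Score a batch of imitation-game trials.

    fooled_evaluator: one boolean per conversation, True when the human
    evaluator mistook the machine for the human participant.
    threshold: fraction of trials the machine must win to "pass"
    (0.30 is an illustrative choice, not a standard).
    """
    if not fooled_evaluator:
        return False  # no trials, no verdict
    deception_rate = sum(fooled_evaluator) / len(fooled_evaluator)
    return deception_rate >= threshold

# Fooling the evaluator in 4 of 10 conversations clears a 30% bar:
print(passes_turing_test([True] * 4 + [False] * 6))  # True
```

Note that the rule only aggregates evaluator verdicts; as the article stresses, Turing does not define intelligence itself, only the deception criterion.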
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
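The brute-force-repelling use of CAPTCHAs mentioned above follows a simple gating pattern: after a few failed login attempts, further attempts are blocked until a task assumed hard for machines is solved. Everything in this sketch (the class name, the three-failure limit, the abstracted-away challenge) is a hypothetical illustration of the pattern, not any real system's API.

```python
class BruteForceGate:
    """Toy sketch: require a CAPTCHA once a client fails too often.

    The CAPTCHA itself is abstracted away; the point is the pattern of
    inserting an AI-hard task to slow automated password guessing.
    """

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def captcha_required(self):
        return self.failures >= self.max_failures

    def record_attempt(self, succeeded):
        """Track one login attempt; success clears the failure counter."""
        if succeeded:
            self.failures = 0
        else:
            self.failures += 1
        return self.captcha_required()

gate = BruteForceGate()
for _ in range(3):              # an automated guesser fails three times...
    gate.record_attempt(False)
print(gate.captcha_required())  # True: human verification now required
```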
<br />
<br />
<br />
== History ==<br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
<br />
=== Narrow AI research ===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
<br />
<br />
=== Modern artificial general intelligence research ===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, and Vicarious. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展是否会如此之快。明确追求通用人工智能的组织包括瑞士人工智能实验室 IDSIA、Nnaisense 和 Vicarious。此外,机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等组织的成立是为了影响通用人工智能的发展路径。最后,像人脑计划这样的项目的目标是建立一个人脑的功能性模拟。2017年一项针对通用人工智能的调查对45个已知的、明确或隐含地(通过已发表的研究)研究通用人工智能的“活跃研发项目”进行了分类,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对谷歌人工智能、苹果的 Siri 等公开且可自由访问的弱人工智能进行了智力测试。这些人工智能最高达到了约47的数值,大约相当于一名上一年级的六岁儿童。成年人的平均值约为100。2014年也进行过类似的测试,当时智商分数的最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI.<br />
<br />
2019年,电子游戏程序员兼航空航天工程师约翰·卡马克(John Carmack)宣布了研究通用人工智能的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain 模拟人脑所需要的处理能力==<br />
<br />
<br />
<br />
===Whole brain emulation 全脑模拟===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<br />
<br />
A popularly discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<br />

实现通用智能行为的一种常被讨论的方法是全脑模拟。通过详细扫描和测绘一个生物大脑,并将其状态复制到计算机系统或其他计算设备中,来建立一个低层次的大脑模型。计算机运行一个对原脑极为忠实的模拟模型,使其行为方式在本质上与原脑相同,或在一切实际意义上无法区分。<br />

<ref name=Roadmap>{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
"The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain." Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
“基本思路是,取一个特定的大脑,详细扫描其结构,并构建一个对原脑极为忠实的软件模型,使其在适当的硬件上运行时,行为方式与原脑基本相同。”在以医学研究为目的的大脑模拟背景下,全脑模拟在计算神经科学和神经信息学中均有讨论。在人工智能研究中,它被作为实现强人工智能的一种途径来讨论。能够提供必要的精细理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,足够精确的大脑图谱将与所需的计算能力在相近的时间尺度上出现。<br />
<br />
<br />
<br />
===Early estimates 初步预测===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.根据对在不同水平上模拟人类大脑的所需处理能力的估计(来自 Ray Kurzweil,[ Anders Sandberg 和 Nick Bostrom ]) ,以及每年从最快的五百台超级计算机获得的数据,绘制出对数尺度趋势线和指数趋势线。它呈现出计算能力每1.1年增长一倍。库兹韦尔相信,在神经模拟中上传思维是可能的,而桑德伯格和博斯特罗姆的报告对意识从何产生则不太确定。{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises. For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10<sup>11</sup> (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10<sup>16</sup> "computations" would be equivalent to 10 petaFLOPS, achieved in 2011). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
根据对在不同层次上模拟人类大脑所需处理能力的估计(来自 Ray Kurzweil 以及 Anders Sandberg 和 Nick Bostrom),并按年份标出 TOP500 榜单上最快的超级计算机,绘制出对数坐标下的指数趋势线。该趋势线假设计算能力每1.1年翻一番。库兹韦尔认为,在神经模拟层次上,思维上传将成为可能;而桑德伯格和博斯特罗姆的报告对意识从何产生则不太确定。为进行低层次的大脑模拟,需要一台极其强大的计算机。人类大脑有大量的突触。10<sup>11</sup>(1000亿)个神经元中的每一个平均与其他神经元有7000个突触连接(突触)。据估计,一个三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触。这个数字随着年龄的增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>(100万亿到500万亿)个突触不等。基于神经元活动的简单开关模型,对大脑处理能力的估计大约是每秒10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了对等价于人脑所需硬件的各种估计,并采纳了每秒10<sup>16</sup>次计算(cps)这一数字。(作为比较,如果一次“计算”相当于一次“浮点运算”,即一种用于评价当前超级计算机的度量,那么10<sup>16</sup>次“计算”相当于10 petaFLOPS,2011年已达到这一水平)。他用这个数字预测,如果撰写当时计算机能力的指数增长持续下去,那么必要的硬件将在2015年到2025年之间的某个时候出现。<br />
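The quoted figures lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only: it recomputes the orders of magnitude stated above (the naive product of neuron count and synapses per neuron, and the 1.1-year doubling trendline), not any of the cited estimates themselves.

```python
import math

# Figures quoted above (orders of magnitude only).
NEURONS = 1e11               # ~100 billion neurons
SYNAPSES_PER_NEURON = 7_000  # average synaptic connections per neuron

# Naive product: 7e14, somewhat above the 1e14-5e14 adult range,
# consistent with synapse counts declining after childhood.
total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"naive synapse count: {total_synapses:.0e}")

# Exponential trendline: capacity doubles every 1.1 years.
def years_to_reach(target_cps: float, current_cps: float,
                   doubling_years: float = 1.1) -> float:
    """Years for capacity to grow from current_cps to target_cps."""
    return doubling_years * math.log2(target_cps / current_cps)

# e.g. from 10 petaFLOPS (1e16, reached in 2011) to 100x that:
print(round(years_to_reach(1e18, 1e16), 1))  # ~7.3 years
```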
<br />
<br />
<br />
===Modelling the neurons in more detail 对神经元的更精细的模拟===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<br />
<br />
与生物神经元相比,库兹韦尔所假设的、在当前许多'''<font color="#ff8000">人工神经网络(artificial neural network)</font>'''实现中使用的人工神经元模型是简单的。大脑模拟可能需要捕捉生物神经元的详细细胞行为,而目前人们对这些行为只有最粗略的了解。对神经行为的生物、化学和物理细节(特别是在分子尺度上)进行全面建模,所需的计算能力将比库兹韦尔的估计大几个数量级。此外,这些估计没有考虑'''<font color="#ff8000">胶质细胞(glial cells)</font>''':其数量至少与神经元相当,甚至可能多达神经元的10倍,并且现已知它们在认知过程中发挥作用。<br />
<br />
<br />
<br />
=== Current research 研究现状===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The Artificial Intelligence System project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model. The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006. A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<br />
<br />
有一些研究项目正在使用更复杂的神经模型研究大脑模拟,这些模型在传统的计算机体系结构上实现。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个“大脑”(含10<sup>11</sup>个神经元)的非实时模拟。在一个由27个处理器组成的集群上,模拟模型的1秒钟花费了50天。2006年,蓝脑项目利用世界上最快的超级计算机架构之一,即 IBM 的蓝色基因平台,创建了对单个大鼠'''<font color="#ff8000">新皮质柱(neocortical column)</font>'''的实时模拟,其中包含大约10,000个神经元和10<sup>8</sup>个突触。一个更长期的目标是建立对人脑生理过程的详细的功能性模拟:蓝脑项目主任亨利·马克拉姆(Henry Markram)2009年在牛津举行的 TED 大会上说,“建造一个人脑并非不可能,我们可以在10年内做到。”此外还有一些声称已模拟出猫脑的说法,但存在争议。神经-硅接口已被提议作为一种可扩展性可能更好的替代实现策略。<br />
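The 2005 run quoted above implies a large real-time slowdown factor. A minimal sketch of that arithmetic (the 27-processor cluster itself is not modelled, only the wall-clock ratio):

```python
# 50 days of wall-clock time to simulate 1 second of model time.
SECONDS_SIMULATED = 1
DAYS_TAKEN = 50

wall_clock_seconds = DAYS_TAKEN * 24 * 60 * 60
slowdown = wall_clock_seconds / SECONDS_SIMULATED
print(f"{slowdown:,.0f}x slower than real time")  # 4,320,000x
```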
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
Hans Moravec addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?". He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
汉斯·莫拉维克(Hans Moravec)在他1997年的论文《计算机硬件何时能与人脑匹敌?》中回应了上述论点(“大脑更复杂”、“神经元的建模必须更详细”)。他测量了现有软件模拟神经组织(特别是视网膜)功能的能力。他的结果既不取决于神经胶质细胞的数量,也不取决于神经元在何处执行何种处理。<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in [[OpenWorm|OpenWorm project]] that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
The actual complexity of modeling biological neurons has been explored in OpenWorm project that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
OpenWorm 项目已经探讨了建模生物神经元的实际复杂性。该项目旨在完全模拟一个蠕虫,其神经网络中只有302个神经元(在总共约1000个细胞中)。项目开始之前,蠕虫的神经网络已经被很好地记录了下来。然而,尽管任务一开始看起来很简单,基于一般神经网络的模型并不起作用。目前,研究的重点是精确模拟生物神经元(部分在分子水平上) ,但结果还不能被称为完全成功。即使在人脑尺度的模型中需要解决的问题的数量与神经元的数量不成比例,沿着这条路径走下去的工作量也是显而易见的。<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches 对基于模拟的研究方法的批评===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
A fundamental criticism of the simulated brain approach derives from embodied cognition where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning. If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel proposes virtual embodiment (like in Second Life), but it is not yet known whether this would be sufficient.<br />
<br />
对模拟大脑方法的一个根本性批评来自具身认知:它将人的具身性视为人类智能的一个基本方面。许多研究者认为,具身性是意义得以落地(grounding)的必要条件。如果这种观点正确,那么任何功能完备的大脑模型都不能只包含神经元,还需要包含更多东西(例如一个机器人身体)。格策尔(Goertzel)提出了虚拟具身(如在《第二人生》中),但目前还不知道这是否足够。<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest no such simulation exists. There are at least three reasons for this:<br />
<br />
自2005年以来,使用能够达到10<sup>9</sup> cps(库兹韦尔的非标准单位“每秒计算次数”,见上文)以上的微处理器的台式计算机已经面世。根据库兹韦尔(和莫拉维克)使用的大脑处理能力估计,这样的计算机应该能够支持对蜜蜂大脑的模拟,但尽管有人对此感兴趣,这样的模拟并不存在。这至少有三个原因:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
The neuron model seems to be oversimplified (see next section).<br />
<br />
神经元模型似乎过于简化了(见下一节)。<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
There is insufficient understanding of higher cognitive processes to establish accurately what the brain's neural activity, observed using techniques such as functional magnetic resonance imaging, correlates with.<br />
<br />
人们对高级认知过程的理解不够充分,无法准确确定通过功能性磁共振成像等技术观察到的大脑神经活动究竟与什么相关。<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
即使我们对认知的理解有了足够的进步,早期的仿真程序也可能非常低效,因此需要更多的硬件。<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. The Extended Mind thesis formalizes the philosophical concept, and research into cephalopods has demonstrated clear examples of a decentralized system.<br />
<br />
有机体的大脑虽然关键,但可能不是认知模型的合适边界。为了模拟蜜蜂的大脑,可能还需要模拟其身体和环境。'''<font color="#ff8000">延展心灵论题(The Extended Mind thesis)</font>'''将这一哲学概念形式化,而对头足类动物的研究已经展示了去中心化系统的明显例子。<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses. Another estimate is 86 billion neurons of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. Glial cell synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
此外,人类大脑的规模目前还没有精确的定论。一种估计认为,人类大脑大约有1000亿个神经元和100万亿个突触。另一种估计是860亿个神经元,其中163亿个在大脑皮层,690亿个在小脑。神经胶质细胞的突触数量目前尚无定量结果,但已知极其庞大。<br />
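As a consistency check on the second estimate above, the cortex and cerebellum figures can be subtracted from the 86 billion total. This is simple arithmetic on the quoted numbers, not an independent estimate:

```python
# Neuron counts from the estimate above.
total = 86e9        # whole brain
cortex = 16.3e9     # cerebral cortex
cerebellum = 69e9   # cerebellum

# Remainder attributed to the rest of the brain: about 7e8 neurons.
rest = total - cortex - cerebellum
print(f"neurons elsewhere: {rest:.1e}")
```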
<br />
<br />
<br />
==Strong AI and consciousness 强人工智能和意识==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He wanted to distinguish between two different hypotheses about artificial intelligence:<br />
<br />
1980年,哲学家约翰·塞尔(John Searle)创造了“强人工智能”(strong AI)一词,作为其中文屋论证的一部分。他想要区分关于人工智能的两种不同假设:<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
*一个人工智能系统可以思考并拥有思维。(词语“思维”对哲学家来说有特殊意义,正如在“身心问题”或“心灵哲学”中的使用一样。)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
*一个人工智能系统只能表现得好像它在思考并拥有思维。<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
The first one is called "the strong AI hypothesis" and the second is "the weak AI hypothesis" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<br />
<br />
第一条被称为“强人工智能假说”,第二条被称为“弱人工智能假说”,因为第一条做出了更强的陈述:它假定机器内部发生了某种特殊的事情,超出了我们所能测试的一切能力。塞尔将“'''<font color="#ff8000">强人工智能假说(strong AI hypothesis)</font>'''”称为“强人工智能”。这种用法在人工智能学术研究和教科书中也很常见。例如:<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|volume=62|year=2019|journal=Business Horizons|pages=15–25|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael}}</ref><br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<br />
<br />
人工智能可能造成存在性风险的论点也遭到许多人的强烈反对。怀疑论者有时指责这一论点带有隐秘的宗教色彩,即用对超级智能可能性的非理性信仰取代了对全能上帝的非理性信仰;一个极端的例子是,杰伦·拉尼尔(Jaron Lanier)认为,“当前的机器在任何意义上具有智能”这一整套概念是“一种幻觉”,是富人炮制的“惊天骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<br />
<br />
现有的许多批评认为,通用人工智能在短期内不太可能实现。计算机科学家戈登·贝尔(Gordon Bell)认为,人类在到达'''<font color="#ff8000">技术奇点(technological singularity)</font>'''之前就会自我毁灭。'''<font color="#ff8000">摩尔定律(Moore's Law)</font>'''的最初提出者戈登·摩尔(Gordon Moore)宣称:“我是一个怀疑论者。我不认为技术奇点会发生,至少在很长一段时间内不会。我也不知道自己为什么会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,担忧人工智能的存在性风险“就像在我们还没有踏上火星时就担心火星上的人口过剩”。<br />
<br />
<br />
<br />
==See also 请参阅==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]] 自动机器学习<br />
<br />
* [[Machine ethics]] 机器伦理<br />
<br />
* [[Multi-task learning]] 多任务学习<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]] 超级智能<br />
<br />
* [[Nick Bostrom]] 尼克·博斯特罗姆<br />
<br />
* [[Eliezer Yudkowsky]] 埃利泽·尤德科夫斯基<br />
<br />
* [[Future of Humanity Institute]] 人类未来研究所<br />
<br />
* [[Outline of artificial intelligence]] 人工智能概要<br />
<br />
* [[Artificial brain]] 人工大脑<br />
<br />
* [[Transfer learning]] 迁移学习<br />
<br />
* [[Outline of transhumanism]] 超人类主义概要<br />
<br />
* [[General game playing]] 一般博弈<br />
<br />
* [[Synthetic intelligence]] 合成智能<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
智能增强(IA),利用信息技术增强人类智能,而不是创造外部自主的“通用人工智能”<br />
<br />
<br />
==Notes 附注==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References 参考文献==<br />
<br />
{{refbegin|2}}<br />

* Stages of Artificial Intelligence, "[https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science]", 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html}}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010}}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013}}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1}}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4}}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links 拓展链接==<br />
<br />
* [https://www.zhihu.com/question/50049187/answer/1361795900 强人工智能目前发展怎样,有希望实现吗?]<br />
<br />
* [https://zhuanlan.zhihu.com/p/59966491 AI寒冬论作者:通用人工智能仍是白日梦]<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15043通用人工智能2020-10-13T06:48:18Z<p>粲兰:</p>
<hr />
<div>此词条暂由彩云小译翻译,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.htmlhttps://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human ntelligence<br />
<br />
Artificial general intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI,<br />
<br />
人工通用智能(Artificial General Intelligence,AGI)是一种假设中的机器智能,它有能力理解或学习人类所能完成的任何智力任务。这是一些人工智能研究的主要目标,也是科幻小说和未来学研究中的常见话题。AGI 也可以被称为强人工智能、<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
full AI,<br />
<br />
完全人工智能,<br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
or general intelligent action. <br />
<br />
或者是通用智能行为。<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience consciousness. Today's AI is speculated to be many years, if not decades, away from AGI.<br />
<br />
一些学术文献将“强人工智能”这一术语保留给能够体验意识的机器。据推测,今天的人工智能距离通用人工智能还有很多年、甚至几十年的差距。<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
Some authorities emphasize a distinction between strong AI and applied AI, also called narrow AI or weak AI. In contrast to strong AI, weak AI is not intended to perform human cognitive abilities. Rather, weak AI is limited to the use of software to study or accomplish specific problem solving or reasoning tasks.<br />
<br />
一些权威人士强调''强人工智能''与''应用人工智能''(也称为''狭义人工智能''或''弱人工智能'')之间的区别。与强人工智能不同,弱人工智能并不旨在实现人类的认知能力,而是仅限于使用软件来研究或完成特定的问题求解或推理任务。<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
As of 2017, over forty organizations are researching AGI.<br />
<br />
截至2017年,已有超过四十家机构在研究通用人工智能(AGI)。<br />
<br />
<br />
<br />
==Requirements 判定要求==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
Various criteria for intelligence have been proposed (most famously the Turing test) but to date, there is no definition that satisfies everyone. However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
人们提出了各种各样的智能标准(最著名的是图灵测试) ,但到目前为止,还没有一个定义能使所有人满意。然而,人工智能研究人员普遍认为,智能需要做到以下几点: <br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
<br />
<br />
{{Harvnb|Russell|Norvig|2003}},<br />
<br />
<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
<br />
<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
<br />
<br />
{{Harvnb|Nilsson|1998}}.<br />
<br />
<br />
</ref><br />
<br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
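The traits above are sometimes read as the interface an agent would have to satisfy. A minimal illustrative sketch follows; the class and method names (`GeneralAgent`, `EchoAgent`, etc.) are hypothetical, chosen only to mirror the list, and the trivial implementation deliberately satisfies the interface without any of the intelligence:

```python
from abc import ABC, abstractmethod

class GeneralAgent(ABC):
    """Hypothetical interface bundling the traits listed above."""

    @abstractmethod
    def reason(self, facts, query):
        """Draw a judgment under uncertainty."""

    @abstractmethod
    def learn(self, experience):
        """Update internal state from new data."""

    @abstractmethod
    def plan(self, goal):
        """Return a sequence of actions toward a goal."""

    @abstractmethod
    def communicate(self, utterance):
        """Respond in natural language."""

class EchoAgent(GeneralAgent):
    """Trivial stand-in: meets the interface, not the intelligence."""
    def __init__(self):
        self.memory = []
    def reason(self, facts, query):
        return query in facts          # toy membership "judgment"
    def learn(self, experience):
        self.memory.append(experience)
    def plan(self, goal):
        return ["achieve " + goal]     # single-step "plan"
    def communicate(self, utterance):
        return "You said: " + utterance
```

The gap between satisfying such an interface and exhibiting general intelligence is precisely the point: each method is easy to stub and hard to do well.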
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
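The imitation-game protocol can be sketched as a blind evaluation loop. The sketch below is illustrative only: the judge, the two responders, and the question list are hypothetical placeholders, not part of Turing's formulation, and "passing" is reduced to the judge guessing the machine's slot incorrectly:

```python
import random

def turing_trial(judge, human_reply, machine_reply, questions):
    """One blind trial: the judge questions responders A and B without
    knowing which is the machine, then names the suspected machine.
    Returns True if the judge is fooled (guesses the wrong slot)."""
    machine_slot = random.choice(["A", "B"])  # hide the machine
    transcript = []
    for q in questions:
        a = machine_reply(q) if machine_slot == "A" else human_reply(q)
        b = machine_reply(q) if machine_slot == "B" else human_reply(q)
        transcript.append((q, a, b))
    guess = judge(transcript)
    return guess != machine_slot

def fool_rate(n_trials, **kwargs):
    """Fraction of trials in which the machine escapes detection."""
    return sum(turing_trial(**kwargs) for _ in range(n_trials)) / n_trials
```

A machine "passes" when its `fool_rate` over many trials is a significant fraction; note how the protocol measures only the judge's inability to discriminate, matching Turing's refusal to define intelligence directly.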
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
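A CAPTCHA of the kind described uses an AI-hard task as a gate. The sketch below models only the issue/verify protocol around that gate; everything in it is a simplified assumption: in a real CAPTCHA the answer would be rendered into a distorted image that current machine vision cannot reliably read, a step omitted here, and the in-memory session store stands in for server-side state:

```python
import secrets
import string

_pending = {}  # token -> expected answer (stand-in for server-side state)

def issue_challenge(length=6):
    """Create a challenge. A real CAPTCHA would return (token, image),
    where the image encodes the answer in a form hard for machines to
    read; here the rendering step is omitted."""
    answer = "".join(secrets.choice(string.ascii_lowercase)
                     for _ in range(length))
    token = secrets.token_hex(8)
    _pending[token] = answer
    return token, answer

def verify(token, response):
    """One attempt per token: the record is consumed on first use,
    so brute-force retries against the same challenge fail."""
    return _pending.pop(token, None) == response
```

The single-use token is the protocol-level counterpart of the brute-force resistance mentioned above: even a correct guess on a second attempt is rejected, so an attacker cannot iterate over the answer space.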
<br />
<br />
<br />
== History ==<br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
<br />
<br />
=== Narrow AI research ===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research ===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
==Processing power needed to simulate a brain==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap>{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
<br />
<br />
===Early estimates===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
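<br />
As a sanity check, the arithmetic behind these estimates can be reproduced directly. This is only a back-of-envelope sketch using the figures quoted in this section (neuron count, average synapses per neuron, Kurzweil's cps figure); none of the numbers are independent measurements.

```python
# Back-of-envelope arithmetic for the brain-capacity estimates quoted above.

NEURONS = 1e11               # ~10^11 neurons (one hundred billion)
SYNAPSES_PER_NEURON = 7_000  # average synaptic connections per neuron

# Naive total synapse count: simple product of the two figures.
total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"estimated synapses: {total_synapses:.0e}")
# 7e+14 -- slightly above the 1e14 to 5e14 adult range quoted in the text,
# which illustrates how much the published estimates disagree.

# Kurzweil's 1997 hardware figure, converted to supercomputer terms,
# assuming one "computation" equals one floating-point operation.
KURZWEIL_CPS = 1e16          # computations per second
petaflops = KURZWEIL_CPS / 1e15
print(f"equivalent hardware: {petaflops:.0f} petaFLOPS")  # 10 petaFLOPS
```

The mismatch between the naive product (7×10<sup>14</sup>) and the quoted adult range (10<sup>14</sup> to 5×10<sup>14</sup>) is in the sources themselves, not an arithmetic error.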
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
<br />
===Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
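<br />
The distance of the 2005 simulation from real time can be quantified from the figures above; a minimal sketch (the closing caveat about parallelization is my addition, not a claim from the cited project):

```python
# Slowdown factor of the 2005 Artificial Intelligence System run described
# above: simulating 1 second of model time took 50 days on 27 processors.

SIMULATED_SECONDS = 1
WALL_CLOCK_SECONDS = 50 * 24 * 3600  # 50 days in seconds

slowdown = WALL_CLOCK_SECONDS / SIMULATED_SECONDS
print(f"slowdown: {slowdown:,.0f}x real time")  # 4,320,000x

# A real-time run at the same level of detail would therefore need roughly
# a 4.3-million-fold increase in effective computing power, ignoring
# memory-bandwidth and parallelization limits.
```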
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
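<br />
The two estimates quoted above can be checked against each other in a few lines; the "rest of brain" residual is a derived quantity, not a figure from either source:

```python
# Two published estimates of human brain scale, as quoted in the text.

est_a_neurons = 100e9    # estimate A: ~100 billion neurons, ~100 trillion synapses
est_b_cortex = 16.3e9    # estimate B: cerebral cortex
est_b_cerebellum = 69e9  # estimate B: cerebellum
est_b_total = 86e9       # estimate B: whole brain

# Neurons outside cortex and cerebellum implied by estimate B:
rest = est_b_total - est_b_cortex - est_b_cerebellum
print(f"rest of brain: {rest / 1e9:.1f} billion neurons")  # 0.7 billion

# How far apart the two whole-brain counts are:
spread = (est_a_neurons - est_b_total) / est_b_total
print(f"whole-brain estimates differ by about {spread:.0%}")  # ~16%
```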
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
弱人工智能假说等同于“通用人工智能是可能的”这一假说。根据罗素(Russell)和诺维格(Norvig)的说法,“大多数人工智能研究人员把弱人工智能假说视为理所当然,而并不关心强人工智能假说。”<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
与塞尔(Searle)不同,雷·库兹韦尔(Ray Kurzweil)用“强人工智能”一词来描述任何行为表现得像拥有思想的人工智能系统,而不论哲学家能否确定它是否真的拥有思想。<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
在科幻小说中,通用人工智能与生物所具有的意识、知觉、智慧和自我意识等特征相关联。然而,根据塞尔的说法,通用智能是否足以产生意识仍是一个悬而未决的问题。“强人工智能”(如上文库兹韦尔所定义的)不应与塞尔的“强人工智能假设”相混淆。强人工智能假设认为,一台行为像人一样智能的计算机必然也拥有思想和意识。而通用人工智能仅指机器所表现出的智能程度,与其是否拥有思想无关。<br />
<br />
<br />
<br />
===Consciousness 意识===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
除智能之外,人类思维还有其他与强人工智能概念相关的方面,它们在科幻小说和人工智能伦理中扮演着重要角色:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
意识:拥有主观体验和思想。值得注意的是,意识很难定义。托马斯·内格尔(Thomas Nagel)给出的一个流行定义是:处于有意识的状态“感觉起来像”某种样子;如果我们没有意识,那就没有任何感觉。内格尔以蝙蝠为例:我们可以合理地问“成为一只蝙蝠是什么感觉?”但我们不太可能问“成为一台烤面包机是什么感觉?”内格尔由此得出结论:蝙蝠看起来是有意识的(即拥有意识),而烤面包机则不是。<br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
自我意识:能够意识到自己是一个独立的个体,尤其是意识到自己的思想。<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
知觉:主观地“感受”感知或情绪的能力。<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
智慧:拥有智慧的能力。<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
这些特征具有道德维度,因为具备这种形式的强人工智能的机器可能拥有法律权利,类似于非人类动物的权利。因此,人们已经开展了初步工作,探讨如何将完全的道德主体纳入现有的法律和社会框架。这些方法主要关注“强”人工智能的法律地位和权利。<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
然而,比尔·乔伊(Bill Joy)等人认为,具有这些特征的机器可能会对人类的生命或尊严构成威胁。这些特征是否为强人工智能所必需仍有待证明。意识的作用尚不清楚,目前也没有公认的方法来检验其存在。如果一台机器装有模拟意识的神经相关物的装置,它会自动拥有自我意识吗?也有可能其中一些特性(比如知觉)会从一台完全智能的机器中自然涌现;又或者,一旦机器开始以明显智能的方式行动,人们就会自然而然地把这些特性归于机器。<br />
例如,智能行为可能足以判定机器产生了知觉,而非反过来。<br />
<br />
<br />
<br />
===Artificial consciousness research 人工意识研究===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
虽然意识在强人工智能/通用人工智能中的作用存在争议,但许多通用人工智能研究人员认为,研究实现意识的可能性至关重要。在一项早期工作中,伊戈尔·亚历山大(Igor Aleksander)认为,创造有意识机器的原理已经存在,但要训练这样一台机器理解语言还需要四十年。<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research 人工智能研究进展缓慢的可能解释==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
自从1956年开始人工智能研究以来,这一领域的发展速度已经随着时间的推移而放缓,并且阻碍了创造具有人类水平的智能行为的机器的目标。这种延迟的一个可能的解释是计算机缺乏足够的存储空间或处理能力。此外,与人工智能研究过程相关的复杂程度也可能限制人工智能研究的进展。<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
虽然大多数人工智能研究人员相信强人工智能可以在未来实现,但也有一些人,如休伯特·德雷福斯(Hubert Dreyfus)和罗杰·彭罗斯(Roger Penrose),否认实现强人工智能的可能性。约翰·麦卡锡(John McCarthy)等许多计算机科学家则相信人类水平的人工智能终将实现,只是无法准确预测其日期。<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究人员可能需要修改其学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如威廉·克罗克森(William Clocksin)在2003年所写:“这个框架始于魏岑鲍姆(Weizenbaum)的观察,即智能只有相对于特定的社会和文化背景才得以表现。”<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
此外,人工智能研究人员已经能够创造出能执行对人类而言复杂的工作(如数学)的计算机,却反而难以开发出能执行对人类而言简单的任务(如行走)的计算机(莫拉维克悖论)。大卫·格勒尼特(David Gelernter)描述的一个问题是,有些人假定思考和推理是等价的。然而,思想与思想的产生者是否可以彼此分离这一问题,一直引起人工智能研究者的兴趣。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员做出却未能兑现的预测,以及对人类行为缺乏完整的理解,削弱了人们对人类水平人工智能这一最初设想的信心。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究者仍对在21世纪实现人工智能的目标保持乐观。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
对于强人工智能研究进展的漫长,人们还提出了其他可能的原因。科学问题的错综复杂,以及需要通过心理学和神经生理学充分理解人脑,限制了许多研究人员在计算机硬件中模拟人脑功能的工作。许多研究人员往往低估对人工智能未来预测的种种质疑,而如果不认真对待这些问题,人们就可能忽视棘手问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
克罗克森说,阻碍人工智能研究进展的一个概念上的限制是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员最初瞄准人工智能的目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并找出如何设计一台能完成特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
人们在研究中处理特定语境时往往会重新定义抽象,这种抽象的实践使研究人员得以只关注少数几个概念。抽象在人工智能研究中最富有成效的应用来自规划和问题求解。尽管其目的是提高计算速度,但抽象的作用也引发了关于抽象算子如何参与其中的问题。<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
人工智能发展缓慢的一个可能原因,与许多人工智能研究人员的一个共识有关,即启发式方法是计算机性能与人类表现之间存在重大差距的环节。编入计算机的特定功能或许能够满足使其与人类智能相匹配的许多要求。这些解释未必就是强人工智能迟迟未能实现的根本原因,但得到了众多研究人员的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
许多人工智能研究人员一直在争论机器是否应该被创造得带有情感。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感浓缩了人类的经历,因为它让人们得以记住那些经历。大卫·格勒尼特(David Gelernter)写道:“除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情感的关注给人工智能研究人员提出了难题,并且随着研究走向未来,它与强人工智能的概念联系在一起。<br />
<br />
<br />
<br />
==Controversies and dangers 争议和风险==<br />
<br />
<br />
<br />
===Feasibility 可能性===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
截至2020年3月,通用人工智能仍停留在推测阶段,因为迄今尚未有此类系统被展示出来。对于通用人工智能是否会出现以及何时出现,人们的看法各不相同。在一个极端,人工智能先驱赫伯特·西蒙(Herbert A. Simon)在1965年写道:“二十年内,机器将能够完成人类能做的任何工作。”然而,这一预言并未实现。微软(Microsoft)联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”,以及“在科学上对认知的深刻理解”。机器人专家阿兰·温菲尔德(Alan Winfield)在《卫报》(The Guardian)上撰文称,现代计算机与人类水平人工智能之间的鸿沟,就像当前的太空飞行与实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
人工智能专家对通用人工智能可行性的看法时起时落,并可能在2010年代有所回升。2012年和2013年进行的四次民意调查显示,专家们对“有50%把握认为通用人工智能将会到来”的时间的中位数猜测为2040年至2050年(因调查而异),平均值为2081年。当被问及“有90%把握”的同样问题时,16.5%的专家回答“永远不会”。关于当前通用人工智能进展的进一步讨论,可参见下文“确认人类水平通用人工智能的测试”和“通用人工智能智商测试”。<br />
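下面用一个简短的 Python 示例说明上述调查中中位数(2040–2050年)与平均值(2081年)为何会相差如此之大。示例中的数据是假设的示意值,并非调查的真实原始数据:少数给出极晚年份(或“永远不会”)的预测会显著拉高平均值,却几乎不影响中位数。<br />

```python
import statistics

# 假设的示意数据(并非调查原始数据):
# 大多数专家的预测集中在 2040-2050 年前后,
# 少数专家给出了极晚的年份。
predictions = [2040, 2042, 2045, 2045, 2045, 2050, 2100, 2200, 2300]

median = statistics.median(predictions)  # 中位数:对极端值不敏感
mean = statistics.mean(predictions)      # 平均值:被极晚的预测拉高

print(median)  # 2045,落在 2040-2050 区间内
print(round(mean, 1))  # 明显晚于中位数
```

这说明在右偏分布(少数人预测极晚甚至“永远不会”)下,中位数和平均值可以给出相当不同的“专家共识”。<br />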
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}} 对人类的潜在威胁===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
人工智能构成存在性风险、且这一风险需要得到比现在多得多的关注,这一论点已得到许多公众人物的支持;其中最著名的也许是埃隆·马斯克(Elon Musk)、比尔·盖茨(Bill Gates)和斯蒂芬·霍金(Stephen Hawking)。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素(Stuart J. Russell)。该论点的支持者有时对怀疑论者表示困惑:盖茨表示他不“理解为什么有些人不担心”,而霍金在2014年的社论中批评了普遍的冷漠:“面对收益和风险都无法估量的可能未来,专家们肯定会尽一切可能确保最好的结果,对吗?错。如果一个更高级的外星文明给我们发来信息说‘我们几十年后到达’,我们会只是回复‘好的,到了给我们打电话,我们会把灯留着’吗?大概不会,但人工智能领域正在发生的事情或多或少就是如此。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
许多关注存在性风险的学者认为,最好的前进方式是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个问题:程序员可以采用哪些类型的保障措施、算法或架构,以最大程度地提高其递归自我改进的人工智能在达到超级智能后继续以友好而非破坏性方式行事的可能性?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | last=Berglas | first=Anthony | year=2008 | title=Artificial Intelligence will Kill our Grandchildren | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://www.zhihu.com/question/50049187/answer/1361795900 How is strong AI progressing, and is there hope of achieving it?] (in Chinese)<br />
<br />
* [https://zhuanlan.zhihu.com/p/59966491 "AI winter" author: artificial general intelligence is still a pipe dream] (in Chinese)<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15042通用人工智能2020-10-13T06:39:38Z<p>粲兰:</p>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi (彩云小译) and has not yet been manually edited or proofread; apologies for any reading inconvenience.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];进行推理,使用策略,解决难题,并在不确定条件下做出判断。<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];表示知识,包括常识性知识。<br />
<br />
* [[automated planning and scheduling|plan]];规划。<br />
<br />
* [[machine learning|learn]];学习。<br />
<br />
* communicate in [[natural language processing|natural language]];使用自然语言进行交流。<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.以及综合运用上述所有技能以实现共同目标。<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
其他重要的能力还包括在可观察到智能行为的世界中进行感知(例如视觉)和行动(例如移动和操纵物体)的能力,其中也包括检测和应对危险的能力。许多跨学科的智能研究方法(例如认知科学、计算智能和决策学)倾向于强调还需要考虑一些额外的特质,例如想象力(指形成并非预先编入程序的心理意象和概念的能力)和自主性。<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
展现出其中许多能力的计算机系统确实存在(参见计算创造力、自动推理、决策支持系统、机器人、进化计算、智能体),但尚未达到人类水平。<br />
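The traits and capabilities above can be pictured as the interface a general agent would need to expose. The Python sketch below is purely illustrative: the class and method names are invented for this example and do not come from any real AGI framework.<br />

```python
from abc import ABC, abstractmethod


class GeneralAgent(ABC):
    """Toy interface mirroring the capability list above.

    Each abstract method corresponds to one trait that AI
    researchers broadly agree an intelligence must exhibit.
    """

    @abstractmethod
    def reason(self, facts: list[str], query: str) -> str:
        """Draw a judgment, possibly under uncertainty."""

    @abstractmethod
    def represent(self, observation: str) -> dict:
        """Encode knowledge, including commonsense knowledge."""

    @abstractmethod
    def plan(self, goal: str) -> list[str]:
        """Produce an ordered sequence of actions."""

    @abstractmethod
    def learn(self, experience: tuple) -> None:
        """Update internal state from experience."""

    @abstractmethod
    def communicate(self, utterance: str) -> str:
        """Converse in natural language."""

    def pursue(self, goal: str) -> list[str]:
        # Integration of all skills toward a common goal; here
        # reduced to planning, the weakest possible stand-in.
        return self.plan(goal)
```

Because every method is abstract, the base class cannot be instantiated; a concrete agent must supply all five capabilities, which loosely mirrors the point that partial systems (a planner alone, a chatbot alone) fall short of general intelligence.<br />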
<br />
<br />
<br />
===Tests for confirming human-level AGI 确认人类水平通用人工智能的测试{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
人们考虑过以下用于确认人类水平通用人工智能的测试:<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
一台机器和一个人类都在互不见面的情况下与第二个人类交谈,后者必须评估两者中哪一个是机器;如果机器能在相当大比例的时间里骗过评估者,它就通过了测试。注意:图灵并没有规定什么样的表现才算得上智能,只规定了一旦知道它是一台机器,就应取消其资格。<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
机器人大学生考试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
一台机器进入一所大学,学习并通过与人类相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
机器从事一项经济上重要的工作,在同一项工作中表现至少和人类一样好。<br />
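Taken together, the four proposals are informal benchmarks rather than formal specifications. One way to see this is to record them as plain data, as in the hypothetical Python sketch below (structure and names are our own invention, purely for illustration):<br />

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AGITest:
    name: str
    proposer: str
    criterion: str  # informal pass condition, as stated in the text


PROPOSED_TESTS = [
    AGITest("Turing Test", "Turing",
            "fool a human evaluator a significant fraction of the time"),
    AGITest("Coffee Test", "Wozniak",
            "enter an average home and figure out how to brew coffee"),
    AGITest("Robot College Student Test", "Goertzel",
            "enroll, pass the same classes as humans, and earn a degree"),
    AGITest("Employment Test", "Nilsson",
            "perform an economically important job at least as well as humans"),
]


def unpassed(results: dict[str, bool]) -> list[str]:
    """Given {test name: passed?}, list the tests still outstanding."""
    return [t.name for t in PROPOSED_TESTS if not results.get(t.name, False)]
```

The `criterion` strings stay informal because each test leaves key thresholds (how often an evaluator must be fooled, which jobs count as economically important) unspecified.<br />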
<br />
<br />
<br />
=== Problems requiring AGI to solve 等待通用人工智能解决的问题===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
对于计算机来说,最困难的问题被非正式地称为“AI完全”或“AI困难”问题,这意味着解决它们需要相当于人类通用智能(即强人工智能)的能力,超出了任何特定用途算法的能力范围。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
据推测,AI完全问题包括通用计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
仅凭当前的计算机技术无法解决AI完全问题,还需要人类计算的参与。这一特性很有用处,例如可以用来检验人类是否在场(CAPTCHA 验证码的目标正是如此),也可用于计算机安全以抵御暴力破解攻击。<br />
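The CAPTCHA idea reduces to a challenge-response protocol: the server issues a task assumed to be AI-hard and grants access only on a correct answer. The Python sketch below is a deliberately trivial stand-in for that protocol; real CAPTCHAs render distorted images precisely because a plain-text prompt like this one is easily solved by machines.<br />

```python
import random


def make_challenge() -> tuple[str, str]:
    """Issue a (prompt, expected_answer) pair.

    A real CAPTCHA would render the word as a distorted image,
    turning recognition into an AI-hard vision problem; here the
    word is sent in the clear purely to show the protocol shape.
    """
    word = random.choice(["swarm", "emergence", "network"])
    prompt = f"Type the word: {word}"
    return prompt, word


def verify(expected: str, response: str) -> bool:
    """Accept only an exact match, ignoring case and whitespace."""
    return response.strip().lower() == expected.lower()
```

A login flow would call `make_challenge()`, show the prompt, and pass the user's reply to `verify()`; resistance to brute force comes from rate-limiting attempts, not from the toy word list used here.<br />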
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
然而,在20世纪70年代初,研究人员显然严重低估了该项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们产出有用的“应用人工智能”。随着20世纪80年代的开始,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间表,其中包括“进行日常交谈”这样的通用人工智能目标。为了响应这一计划以及专家系统的成功,工业界和政府都重新将资金投入这一领域。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标也从未实现。在20年里,预言通用人工智能即将实现的人工智能研究人员第二次被证明从根本上犯了错误。到了20世纪90年代,人工智能研究人员因做出无法兑现的承诺而声名受损。他们变得根本不愿做出预测,并避免提及任何“人类水平”的人工智能,以免被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
在1990年代和21世纪初,主流人工智能取得了更大的商业成功和学术声望,因为它们把重点放在能够产生可验证结果和商业应用的具体子问题上,例如人工神经网络和统计机器学习。这些“应用人工智能”系统现在在整个技术产业中得到广泛应用,这方面的研究得到了学术界和产业界的大量资助。目前,这一领域的发展被认为是一个新兴的趋势,并有望在10多年内进入一个成熟的阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过组合解决各个子问题的程序,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条通往人工智能的自下而上的路线,终有一天会与传统的自上而下的路线会合,并走过一大半的路程,从而提供现实世界中的能力,以及推理程序中一直令人沮丧地难以企及的常识知识。当象征性的金道钉被钉下、将两方面的努力连为一体时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
然而,即使这一基本理念也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在其1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们常常表达这样的期望:建模认知的‘自上而下’(符号)方法终将在中间某处与‘自下而上’(感官)方法相会。如果本文关于符号基础的考虑是正确的,那么这种期望就是无可救药地模块化的,从感官通向符号其实只有一条可行的路线:自底向上。像计算机软件层那样自由漂浮的符号层,永远无法通过这条路线到达(反之亦然);也不清楚我们为什么要试图到达这样一个层次,因为那看起来无异于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
“通用人工智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产与作业的影响时使用。这一术语在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。相应的研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)的 Cyc 项目(始于1984年)以及艾伦·纽厄尔(Allen Newell)的 Soar 项目都被认为属于通用人工智能的范畴。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届通用人工智能暑期学校。第一门大学课程于2010年和2011年由托多尔·阿瑙多夫(Todor Arnaudov)在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,短期内无法完全复制。不过,仍有少数计算机科学家活跃在通用人工智能研究领域,其中许多人正在为一系列通用人工智能会议做出贡献。这些研究极其多样化,而且往往具有开创性。格兹尔在其著作的序言中说,对于构建一个真正灵活的通用人工智能所需时间的估计从10年到一个多世纪不等,但通用人工智能研究界似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在'''<font color="#ff8000">《奇点临近》(The Singularity is Near)</font>'''中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to that of a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
==Processing power needed to simulate a brain==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap>{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
<br />
<br />
===Early estimates===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS, which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]].) He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
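The arithmetic behind these estimates is simple enough to check. The following Python sketch uses only the figures quoted above (the per-neuron synapse average, Kurzweil's 10<sup>16</sup> cps figure, and the chart's 1.1-year doubling time), not any new data:<br />

```python
# Synapse count implied by the per-neuron average quoted above.
neurons = 1e11                    # ~100 billion neurons
synapses_per_neuron = 7_000       # average connections per neuron
synapses = neurons * synapses_per_neuron
print(f"implied synapses: {synapses:.0e}")        # 7e+14

# Kurzweil's 1997 hardware figure, expressed in petaFLOPS
# (treating one "computation" as one floating point operation).
kurzweil_cps = 1e16
petaflops = kurzweil_cps / 1e15   # 1 petaFLOPS = 1e15 FLOPS
print(f"equivalent: {petaflops:.0f} petaFLOPS")   # 10

# Trendline from the chart: capacity doubling every 1.1 years
# compounds to roughly a 545x increase per decade.
growth_per_decade = 2 ** (10 / 1.1)
print(f"per-decade growth: {growth_per_decade:.0f}x")
```

Note that the implied 7×10<sup>14</sup> synapses sits just above the quoted adult range of 10<sup>14</sup> to 5×10<sup>14</sup>, which is why the text treats them as separate estimates.<br />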
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
<br />
===Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
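A minimal Python sketch, using only the figures quoted above, makes the scale gap concrete: 50 days of wall-clock time for 1 second of model time is a slowdown of roughly four million, and the Blue Brain column's synapse-to-neuron ratio is the same order of magnitude as the ~7,000 average cited earlier:<br />

```python
# Figures quoted above: the 2005 non-real-time simulation and
# the 2006 Blue Brain neocortical-column run.
wall_clock_s = 50 * 24 * 3600        # 50 days on a 27-processor cluster
model_time_s = 1                     # seconds of brain time simulated
slowdown = wall_clock_s / model_time_s
print(f"slowdown: {slowdown:,.0f}x real time")    # 4,320,000x

column_neurons = 10_000              # one rat neocortical column
column_synapses = 1e8
ratio = column_synapses / column_neurons
print(f"synapses per neuron: {ratio:,.0f}")       # 10,000
```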
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
<br />
<br />
<br />
Many AI researchers have debated whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.{{refbegin|2}}<br />
<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15041通用人工智能2020-10-13T06:37:03Z<p>粲兰:</p>
<hr />
<div>This entry was initially translated by the Caiyun Xiaoyi machine-translation service and has not yet been manually edited or proofread; we apologize for any inconvenience in reading.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
{{Harvnb|Russell|Norvig|2003}},<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
{{Harvnb|Nilsson|1998}}.<br />
</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];进行推理,使用策略,解决谜题,并在不确定条件下做出判断。<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];表示知识,包括常识。<br />
<br />
* [[automated planning and scheduling|plan]];规划。<br />
<br />
* [[machine learning|learn]];学习。<br />
<br />
* communicate in [[natural language processing|natural language]];使用自然语言进行交流。<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.综合运用所有技巧以达到某个目的。<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
其他重要的能力包括在可观测到智能行为的客观世界中进行感知(例如视觉)和行动(例如移动和操纵物体)的能力,其中也包括检测和应对危险的能力。许多跨学科的智能研究方法(例如认知科学、计算智能和决策科学)倾向于强调有必要考虑额外的特征,例如想象力(即形成未被预先编入程序的意象和概念的能力)和自主性。<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
展现出上述许多能力的计算机系统确实存在(参见计算创造力、自动推理、决策支持系统、机器人、进化计算、智能体),但尚未达到人类水平。<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
人们考虑过以下用于确认人类水平通用人工智能的测试:<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
一台机器和一个人类各自在互不可见的情况下与第二个人类对话,后者必须评估两者之中哪一个是机器;如果机器能在相当大比例的时间里骗过评估者,它就通过了测试。注意:图灵并没有规定什么才算智能,只规定若评估者知道它是一台机器,它就不能通过。<br />
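The pass criterion above (the machine passes if it fools the evaluator in a significant fraction of rounds) can be sketched as a toy simulation. This is a minimal illustrative sketch, not a standard benchmark: the scripted participants, the single question, and the 30% threshold are all assumptions (Turing himself mentioned roughly 30% after five minutes of questioning).

```python
import random

def run_turing_test(evaluator, machine, human, n_rounds=100, threshold=0.3):
    # In each round the evaluator converses blindly with both parties,
    # then names which one it believes is the machine.
    fooled = 0
    for _ in range(n_rounds):
        pair = [machine, human]
        random.shuffle(pair)                 # present the two unlabeled, in random order
        answers = [p("What do you enjoy about poetry?") for p in pair]
        guess = evaluator(answers)           # index the evaluator picks as the machine
        if pair[guess] is not machine:       # evaluator chose the human: it was fooled
            fooled += 1
    # "A significant fraction of the time" is modeled as a fixed threshold.
    return fooled / n_rounds >= threshold

# Toy participants (purely illustrative):
machine = lambda q: "I find rhyme soothing."
human = lambda q: "I love how metaphor compresses feeling."
evaluator = lambda answers: 0 if "rhyme" in answers[0] else 1  # crude keyword heuristic

print(run_turing_test(evaluator, machine, human))  # scripted machine is always identified: False
```

Here the scripted machine is trivially identified, so the test reports a failure; a stronger conversational program would need to push the fooled fraction above the threshold.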
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
机器人大学生测试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
一台机器注册进入一所大学,学习并通过与人类相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
机器从事一项经济上重要的工作,在同一项工作中表现至少和人类一样好。<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve 等待通用人工智能解决的问题===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
对计算机而言最困难的问题被非正式地称为“AI完全”或“AI困难”问题,意味着解决它们需要相当于人类智能的通用能力(即强人工智能),超出了任何专用算法的能力范围。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
据推测,AI完全问题包括通用计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
仅凭当前的计算机技术无法解决AI完全问题,还需要人类计算的参与。这一特性可以派上用场,例如用来检验人类的存在(验证码 CAPTCHA 的目标正是如此),以及用于计算机安全以抵御暴力破解攻击。<br />
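The brute-force point above can be illustrated with a small sketch: a login that demands a CAPTCHA after a few failed guesses stalls a dictionary attack, because passing the gate is assumed to require human-level perception. All names here (the secret, the challenge string, the `captcha_solver` callback) are hypothetical illustrations, not a real API.

```python
SECRET = "hunter2"  # the password the attacker is trying to guess (illustrative)

def brute_force(candidates, captcha_solver=None, max_free_tries=3):
    """Try candidate passwords against a CAPTCHA-gated login.
    After `max_free_tries` failures, every further attempt must include a
    correct CAPTCHA answer; `captcha_solver` stands in for the (assumed
    AI-hard) ability to read the challenge."""
    failures = 0
    for pw in candidates:
        if failures >= max_free_tries:
            # The challenge is trivial here; in reality it would be a
            # distorted image whose text only a human can read.
            challenge, answer = "x7Qp", "x7Qp"
            if captcha_solver is None or captcha_solver(challenge) != answer:
                return None  # bot cannot pass the gate: attack repelled
        if pw == SECRET:
            return pw
        failures += 1
    return None

# A dictionary attack with no human help stalls after three free guesses:
print(brute_force(["123456", "password", "qwerty", "hunter2"]))  # None
# With a (human) solver, the same candidate list succeeds:
print(brute_force(["123456", "password", "qwerty", "hunter2"],
                  captcha_solver=lambda c: c))  # hunter2
```

The design point is that the gate converts an automated enumeration problem into one requiring a capability bots are assumed to lack, which is exactly the AI-hardness property the paragraph describes.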
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道:“机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000(HAL 9000),它体现了人工智能研究人员相信自己到2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)担任了让哈尔9000尽可能符合当时共识预测的项目顾问;克里维尔援引他在1967年关于这个问题的话说:“在一代人的时间里……创造‘人工智能’的问题将大体上得到解决。”不过明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
然而,在20世纪70年代初,人们发现研究人员严重低估了这一项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。随着20世纪80年代的开始,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间线,其中包括“进行日常交谈”之类的通用人工智能目标。为了回应这一项目以及专家系统的成功,工业界和政府都重新向这一领域注入资金。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标也从未实现。在20年间,预测通用人工智能即将实现的研究人员第二次被证明从根本上错了。到了20世纪90年代,人工智能研究人员已因做出无法兑现的承诺而声名不佳。他们变得根本不愿做出预测,并且避免提及“人类水平”的人工智能,以免被贴上“狂热的梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
在1990年代和21世纪初,主流人工智能取得了更大的商业成功和学术声望,因为它们把重点放在能够产生可验证结果和商业应用的具体子问题上,例如人工神经网络和统计机器学习。这些“应用人工智能”系统现在在整个技术产业中得到广泛应用,这方面的研究得到了学术界和产业界的大量资助。目前,这一领域的发展被认为是一个新兴的趋势,并有望在10多年内进入一个成熟的阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过组合解决各个子问题的程序来开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会在中途与传统的自上而下的路线相遇,从而提供在推理程序中一直令人沮丧地难以捉摸的现实世界能力和常识知识。当象征性的金道钉被钉下、两方面的努力合二为一时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
然而,即使是这一基本思路也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的‘自上而下’(符号)方法终将在中间某处与‘自下而上’(感官)方法相遇。如果本文中关于符号基础的考虑是正确的,那么这种期望就是无可救药地模块化的,并且从感知到符号实际上只有一条可行的路径:自底向上。像计算机软件层那样自由漂浮的符号层永远无法通过这条路径达到(反之亦然);也不清楚我们为什么要试图达到这样一个层次,因为看起来,达到那里只不过等于把我们的符号从其内在意义上连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
“通用人工智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用。这个术语在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。这一研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)始于1984年的 Cyc 项目,以及艾伦·纽厄尔(Allen Newell)的 Soar 项目,都被认为属于通用人工智能的范畴。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届通用人工智能暑期学校。第一批大学课程于2010年和2011年由托多尔·阿瑙多夫(Todor Arnaudov)在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,在短期内无法完全复制。不过,仍有少数计算机科学家活跃在通用人工智能研究中,其中许多人正在为一系列通用人工智能会议做出贡献。这项研究极其多样化,而且往往具有开创性。格兹尔在他的书的序言中说,对建成一个真正灵活的通用人工智能所需时间的估计从10年到一个多世纪不等,但通用人工智能研究界似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在'''<font color="#ff8000">《奇点临近》(The Singularity is Near)</font>'''中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展是否会如此之快。明确追求通用人工智能的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious。此外,机器智能研究所和 OpenAI 等组织的成立也是为了影响通用人工智能的发展路径。最后,像人脑计划这样的项目的目标是建立人脑的功能性模拟。2017年一项关于通用人工智能的调查,对45个明确地或隐含地(通过已发表的研究)研究通用人工智能的已知“活跃研发项目”进行了分类,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对谷歌人工智能、苹果 Siri 等公开且可自由访问的弱人工智能进行了智力测试。这些人工智能最高达到约47分,大致相当于一名上一年级的六岁儿童。成年人的平均分约为100。2014年也进行过类似的测试,当时的智商分数最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI.<br />
<br />
2019年,游戏程序师和航空工程师约翰·卡迈克(John Carmack)宣布了研究通用人工智能的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain 模拟人脑所需要的处理能力==<br />
<br />
<br />
<br />
===Whole brain emulation 全脑模拟===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popular discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<br />
<br />
A popular discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
实现通用智能行为的一种被广泛讨论的方法是全脑模拟:通过详细扫描和测绘一个生物大脑,并把它的状态复制到计算机系统或其他计算设备中,从而建立一个低层次的大脑模型。计算机运行的模拟模型对原脑如此忠实,以至于它的行为方式与原来的大脑在本质上相同,或者就一切实际目的而言无法区分。<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
"The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
“基本思路是:取一个特定的大脑,详细扫描其结构,并构建一个对原脑如此忠实的软件模型,以至于在合适的硬件上运行时,它的行为方式与原来的大脑在本质上相同。”在以医学研究为目的的大脑模拟背景下,全脑模拟在计算神经科学和神经信息学领域被讨论。在人工智能研究中,它被作为实现强人工智能的一种途径加以讨论。能够提供所需详细理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,足够高质量的大脑图谱将在与所需计算能力相近的时间尺度上问世。<br />
<br />
<br />
<br />
===Early estimates 初步预测===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, <{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}> Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises.根据对在不同水平上模拟人类大脑的所需处理能力的估计(来自 Ray Kurzweil,[ Anders Sandberg 和 Nick Bostrom ]) ,以及每年从最快的五百台超级计算机获得的数据,绘制出对数尺度趋势线和指数趋势线。它呈现出计算能力每1.1年增长一倍。库兹韦尔相信,在神经模拟中上传思维是可能的,而桑德伯格和博斯特罗姆的报告对意识从何产生则不太确定。]] For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10<sup>11</sup> (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10<sup>16</sup> "computations" would be equivalent to 10 petaFLOPS, achieved in 2011). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
图注:对在不同水平上模拟人脑所需处理能力的各种估计(来自雷·库兹韦尔,以及安德斯·桑德伯格和尼克·博斯特罗姆),以及按年份绘制的 TOP500 榜单上最快超级计算机的性能。注意图中采用对数坐标,指数趋势线假设计算能力每1.1年翻一番。库兹韦尔认为在神经模拟层面即可实现思维上传,而桑德伯格和博斯特罗姆的报告对意识产生于哪个层面则不太确定。进行低层次的大脑模拟需要一台极其强大的计算机。人脑拥有数量巨大的突触:10<sup>11</sup>(1000亿)个神经元中,每个平均与其他神经元有7000个突触连接。据估计,三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触,这一数字随年龄增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>个突触(100万亿到500万亿)不等。基于神经元活动的简单开关模型,对大脑处理能力的一种估计是每秒约10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了关于与人脑等效所需硬件的各种估计,并采纳了每秒10<sup>16</sup>次计算(cps)这一数字。(作为比较,如果一次“计算”相当于一次“浮点运算”,即目前用于衡量超级计算机性能的指标,那么10<sup>16</sup>次“计算”相当于2011年已实现的10 petaFLOPS。)他据此预测,如果撰写当时计算机能力的指数增长持续下去,所需硬件将在2015年至2025年之间出现。<br />
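上文的各个数量级估计可以用简单的算术相互印证。下面的 Python 草稿仅作示意,所有常数均取自正文引用的数字:

```python
import math

# 正文中引用的各项估计(示意用,并非新的测量结果)
NEURONS = 1e11                # 约1000亿个神经元
SYNAPSES_PER_NEURON = 7_000   # 每个神经元平均的突触连接数
KURZWEIL_CPS = 1e16           # 库兹韦尔1997年采纳的每秒计算次数(cps)

# 由每神经元平均突触数推算的突触总数:
total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"{total_synapses:.0e}")  # 7e+14,略高于成年人10^14~5×10^14的估计区间

# 若一次“计算”等于一次浮点运算,库兹韦尔的数字相当于10 petaFLOPS:
petaflops = KURZWEIL_CPS / 1e15
print(f"{petaflops:.0f} petaFLOPS")  # 10,2011年已实现

# 图注中的趋势线假设计算能力每1.1年翻一番;
# 按此速率,性能提高1000倍所需的年数约为:
years_per_1000x = 1.1 * math.log2(1000)
print(f"{years_per_1000x:.1f} 年")
```

注意由“每神经元7000个突触”推出的7×10<sup>14</sup>略高于正文给出的成人区间上限,这与“三岁时约10<sup>15</sup>、随年龄下降”的说法方向一致,也反映出这些估计本身的粗略程度。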
<br />
<br />
<br />
===Modelling the neurons in more detail 对神经元的更精细的模拟===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<br />
<br />
与生物神经元相比,库兹韦尔所假设的、也是当前许多'''<font color="#ff8000">人工神经网络(artificial neural network)</font>'''实现中采用的人工神经元模型是十分简化的。大脑模拟很可能需要捕捉生物神经元细胞层面的详细行为,而目前人们对这些行为仅有最粗略的了解。若要对神经行为的生物、化学和物理细节(特别是在分子尺度上)进行完整建模,所需的计算能力将比库兹韦尔的估计大好几个数量级。此外,这些估计没有考虑'''<font color="#ff8000">胶质细胞(glial cells)</font>''':其数量至少与神经元相当,甚至可能多达神经元的10倍,并且现已知它们在认知过程中发挥作用。<br />
<br />
<br />
<br />
=== Current research 研究现状===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The Artificial Intelligence System project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model. The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006. A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<br />
<br />
有一些研究项目正在传统计算架构上使用更复杂的神经模型研究大脑模拟。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个“大脑”(含10<sup>11</sup>个神经元)的非实时模拟:在一个由27个处理器组成的集群上,模拟模型中的1秒耗费了50天。2006年,蓝脑项目利用世界上最快的超级计算机架构之一,即 IBM 的蓝色基因平台,创建了由大约10,000个神经元和10<sup>8</sup>个突触组成的单个大鼠'''<font color="#ff8000">新皮质柱(neocortical column)</font>'''的实时模拟。一个更长期的目标是建立人脑生理过程的详细的功能性模拟:“建造一个人脑并非不可能,我们可以在10年内做到,”蓝脑项目主任亨利·马克拉姆(Henry Markram)2009年在牛津举行的 TED 大会上说。此外也有一些声称模拟了猫脑的有争议的说法。神经-硅接口已被提议作为一种可能具有更好可扩展性的替代实现策略。<br />
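正文中2005年与2006年两次模拟的规模差距,可以用一段小算术直观呈现(数字均取自正文,仅作示意):

```python
# 2005年人工智能系统项目:27个处理器的集群上,
# 模拟模型时间1秒耗费了50天的真实时间。
SECONDS_PER_DAY = 86_400
wall_clock_s = 50 * SECONDS_PER_DAY   # 真实耗时(秒)
model_s = 1                           # 被模拟的模型时间(秒)
slowdown = wall_clock_s / model_s
print(f"{slowdown:.2e}")  # 4.32e+06,即比实时慢约432万倍

# 2006年蓝脑项目的单个新皮质柱(约10,000个神经元)
# 与正文前面给出的全脑神经元数(约10^11)的比例:
column_neurons = 10_000
brain_neurons = 1e11
print(f"{brain_neurons / column_neurons:.0e}")  # 1e+07,约一千万个这样的皮质柱
```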
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
Hans Moravec addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?". He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
汉斯·莫拉维克(Hans Moravec)在他1997年的论文《计算机硬件何时能与人脑匹敌?》中回应了上述论点(“大脑更复杂”、“神经元必须更详细地建模”)。他测量了现有软件模拟神经组织(特别是视网膜)功能的能力。他的结果既不依赖于胶质细胞的数量,也不依赖于何种处理由神经元在何处执行。<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in [[OpenWorm|OpenWorm project]] that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
The actual complexity of modeling biological neurons has been explored in OpenWorm project that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
OpenWorm 项目探索了对生物神经元建模的实际复杂性。该项目旨在完整模拟一种蠕虫,其神经网络仅有302个神经元(全身共约1000个细胞)。在项目开始之前,这种动物的神经网络已经被很好地记录下来。然而,尽管任务一开始看起来很简单,基于一般神经网络的模型并不起作用。目前,工作重点是精确模拟生物神经元(部分在分子水平上),但结果还不能称为完全成功。即使人脑尺度模型中待解决问题的数量与神经元数量不成比例,沿这条路径所需的工作量也是显而易见的。<br />
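为说明“基于一般神经网络的模型”大致指什么,下面给出一个302个神经元的通用速率网络草稿。其中的随机权重和 tanh 动力学纯属示意,并不是蠕虫真实的(已被详细记录的)连接组;正文指出,正是这类通用模型被发现并不起作用:

```python
import math
import random

# 一个刻意写得很“通用”的302神经元速率网络(示意用)。
random.seed(0)
N = 302  # 秀丽隐杆线虫的神经网络含302个神经元
W = [[random.gauss(0, 1 / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
x = [random.gauss(0, 1) for _ in range(N)]  # 初始活动水平

def step(x, dt=0.1, tau=1.0):
    """对漏积分速率动力学 tau*dx/dt = -x + W·tanh(x) 做一步欧拉积分。"""
    rates = [math.tanh(v) for v in x]
    return [xi + dt / tau * (-xi + sum(w * r for w, r in zip(row, rates)))
            for xi, row in zip(x, W)]

for _ in range(100):
    x = step(x)
print(len(x))  # 302
```

这样的模型几十行即可写出,这也反衬出正文的论点:难点不在于“搭一个网络”,而在于逐个还原真实神经元(乃至分子层面)的行为。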
<br />
<br />
<br />
===Criticisms of simulation-based approaches 对基于模拟的研究方法的批评===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
A fundamental criticism of the simulated brain approach derives from embodied cognition where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning. If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel proposes virtual embodiment (like in Second Life), but it is not yet known whether this would be sufficient.<br />
<br />
对模拟大脑方法的一个根本性批评来自具身认知,该观点将人的具身性视为人类智能的一个本质方面。许多研究者认为,具身性是意义得以落地(grounding)的必要条件。如果这种观点正确,那么任何功能完整的大脑模型都需要包含的不只是神经元(即还需要一个机器人身体)。格策尔(Goertzel)提出了虚拟具身(比如在《第二人生》中),但目前尚不清楚这是否足够。<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest no such simulation exists . There are at least three reasons for this:<br />
<br />
自2005年以来,采用性能超过10<sup>9</sup> cps(库兹韦尔的非标准单位“每秒计算次数”,见上文)微处理器的台式计算机已经面世。根据库兹韦尔(和莫拉维克)采用的大脑处理能力估算,这样的计算机应该能够支持对蜜蜂大脑的模拟,但尽管有人对此感兴趣,这样的模拟至今并不存在。这至少有三个原因:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
The neuron model seems to be oversimplified (see next section).<br />
<br />
神经元模型似乎过于简化了(见下一节)。<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
There is insufficient understanding of higher cognitive processes to establish accurately what the brain's neural activity, observed using techniques such as functional magnetic resonance imaging, correlates with.<br />
<br />
人们对高级认知过程的理解还不够充分,无法准确确定通过功能性磁共振成像等技术观察到的大脑神经活动究竟与什么相关。<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
即使我们对认知的理解有了足够的进步,早期的仿真程序也可能非常低效,因此需要更多的硬件。<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. The Extended Mind thesis formalizes the philosophical concept, and research into cephalopods has demonstrated clear examples of a decentralized system.<br />
<br />
有机体的大脑虽然关键,但可能不是认知模型的合适边界。为了模拟蜜蜂的大脑,可能还需要模拟其身体和环境。'''<font color="#ff8000">延展心灵论题(The Extended Mind thesis)</font>'''将这一哲学概念形式化,而对头足类动物的研究已经展示了去中心化系统的清晰实例。<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses. Another estimate is 86 billion neurons of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. Glial cell synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
此外,目前对人类大脑规模的估计还不够精确。一种估计认为人脑约有1000亿个神经元和100万亿个突触。另一种估计是860亿个神经元,其中163亿个在大脑皮层,690亿个在小脑。胶质细胞的突触数量目前尚无定量结果,但已知极为庞大。<br />
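正文中两种估计之间的差异可以直接算出来,这也正是“规模尚无定论”的含义(数字均取自正文,仅作示意):

```python
# 第一种估计:约1000亿个神经元、100万亿个突触
est1_neurons = 100e9
est1_synapses = 100e12

# 第二种估计:860亿个神经元,按脑区拆分
est2_cortex = 16.3e9      # 大脑皮层
est2_cerebellum = 69e9    # 小脑
est2_total = 86e9         # 全脑
est2_rest = est2_total - est2_cortex - est2_cerebellum
print(f"{est2_rest / 1e9:.1f} billion")  # 0.7:皮层和小脑之外仅约7亿个神经元

# 第一种估计隐含的每神经元平均突触数:
print(f"{est1_synapses / est1_neurons:.0f}")  # 1000,与前文“平均7000”的估计相差数倍
```

两组数字之间数倍的出入,正说明了正文所称的估计分歧。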
<br />
<br />
<br />
==Strong AI and consciousness 强人工智能和意识==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He wanted to distinguish between two different hypotheses about artificial intelligence:<br />
<br />
1980年,哲学家约翰•塞尔(John Searle)创造了“强人工智能”(strong AI)一词,作为其中文屋论证的一部分。他想要区分关于人工智能的两种不同假设:<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
*一个人工智能系统可以思考并拥有思维。(词语“思维”对哲学家来说有特殊意义,正如在“身心问题”或“心灵哲学”中的使用一样。)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
*一个人工智能系统(仅仅)能表现得好像它在思考并拥有思维。<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
The first one is called "the strong AI hypothesis" and the second is "the weak AI hypothesis" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
第一条被称为“强人工智能假设”,第二条被称为“弱人工智能假设”,因为第一条做出了更强的断言:它假定机器身上发生了某种超出我们所能测试的全部能力的特殊事情。塞尔将“'''<font color="#ff8000">强人工智能假说(strong AI hypothesis)</font>'''”称为“强人工智能”。这种用法在人工智能学术研究和教科书中也很常见。例如:<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has delayed the goal of creating machines capable of intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack sufficient memory or processing power.{{sfn|Clocksin|2003}} In addition, the complexity of the problems involved in AI research may also limit its progress.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, allows researchers to concentrate on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed into a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
<br />
<br />
Many AI researchers have debated whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|volume=62|year=2019|journal=Business Horizons|pages=15–25|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael}}</ref><br />
<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
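The spread between the median estimate (2040–2050) and the mean (2081) is what a right-skewed distribution produces: a minority of very late estimates pulls the mean well above the median while leaving the median largely unchanged. A minimal sketch, using hypothetical estimates rather than the actual poll responses:

```python
import statistics

# Hypothetical 50%-confidence AGI-arrival estimates (years), for illustration only.
# A few very late answers skew the mean upward; the median is robust to them.
estimates = [2035, 2040, 2040, 2045, 2045, 2050, 2050, 2060, 2150, 2300]

median = statistics.median(estimates)  # middle of the sorted values
mean = statistics.mean(estimates)      # pulled up by the late tail

print(f"median: {median}, mean: {mean}")
```

With these illustrative numbers the median stays in the mid-2040s while the mean lands decades later, mirroring the shape of the survey results reported above.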
<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
* [[Automated machine learning]]<br />
* [[Machine ethics]]<br />
* [[Multi-task learning]]<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
* [[Nick Bostrom]]<br />
* [[Eliezer Yudkowsky]]<br />
* [[Future of Humanity Institute]]<br />
* [[Outline of artificial intelligence]]<br />
* [[Artificial brain]]<br />
* [[Transfer learning]]<br />
* [[Outline of transhumanism]]<br />
* [[General game playing]]<br />
* [[Synthetic intelligence]]<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.{{refbegin|2}}<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15040通用人工智能2020-10-13T06:31:53Z<p>粲兰:</p>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi and has not yet been manually reviewed or proofread; apologies for any reading inconvenience.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref><br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
{{Harvnb|Russell|Norvig|2003}},<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
{{Harvnb|Nilsson|1998}}.<br />
</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];进行推理,使用策略,解决谜题,并在不确定条件下做出判断。<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];表示知识,包括常识性知识。<br />
<br />
* [[automated planning and scheduling|plan]];规划。<br />
<br />
* [[machine learning|learn]];学习。<br />
<br />
* communicate in [[natural language processing|natural language]];使用自然语言进行交流。<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.并综合运用上述所有技能以实现共同目标。<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
其他重要的能力包括在可观察到智能行为的世界中进行感知(例如视觉)和行动(例如移动和操纵物体)的能力。这也包括检测和应对危险的能力。许多关于智能的跨学科研究进路(例如认知科学、计算智能和决策科学)倾向于强调还需要考虑额外的特征,例如想象力(即形成未经编程输入的意象和概念的能力)和自主性。<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
展现出其中许多能力的计算机系统确实存在(参见计算创造性、自动推理、决策支持系统、机器人、进化计算、智能代理),但尚未达到人类水平。<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
人们考虑过以下用于确认人类水平通用人工智能的测试:<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
一台机器和一个人类都在互相看不见的情况下与第二个人类交谈,后者必须评估两者中哪一个是机器;如果机器能在相当大比例的时间里骗过评估者,它就通过了测试。注意:图灵并没有规定什么才算得上智能,只是规定,一旦知道它是一台机器,就应认定其不合格。<br />
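上面描述的评估流程可以写成一个极简的程序草图。这只是一个示意性框架,并非图灵原文的规定:其中 run_imitation_game、judge、machine、human 等名称以及 fool_threshold 参数(对应图灵所说"相当大比例的时间",原文未给出具体数值)均为本示例的假设。<br />

```python
import random

def run_imitation_game(judge, machine, human, n_rounds=100, fool_threshold=0.3):
    """模仿游戏的极简评估框架草图。

    judge(reply_a, reply_b) 返回裁判认为哪一方是机器('A' 或 'B');
    machine(prompt) 与 human(prompt) 分别返回机器和人类对提示的回复。
    若裁判在足够大比例的回合中判断错误,则认为机器"通过"了测试。
    """
    fooled = 0
    for i in range(n_rounds):
        prompt = f"question-{i}"
        # 随机决定机器扮演 A 还是 B;裁判看不到双方身份,只看到回复文本
        machine_is_a = random.random() < 0.5
        reply_a = machine(prompt) if machine_is_a else human(prompt)
        reply_b = machine(prompt) if not machine_is_a else human(prompt)
        truth = 'A' if machine_is_a else 'B'
        if judge(reply_a, reply_b) != truth:
            fooled += 1
    return fooled / n_rounds >= fool_threshold
```

例如,一个总能根据回复内容认出机器的裁判会使测试返回 False;而一个系统性判断错误的裁判则每一回合都被"骗过",测试返回 True。<br />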
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
机器人大学生考试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
一台机器进入一所大学,学习并通过与人类相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
机器从事一项经济上重要的工作,在同一项工作中表现至少和人类一样好。<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve 等待通用人工智能解决的问题===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
对于计算机来说,最困难的问题被非正式地称为"AI完全问题"或"AI困难问题",这意味着解决这些问题需要相当于人类智能的通用能力(即强人工智能),超出了任何特定用途算法的能力范围。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
据推测,AI完全问题包括通用计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
仅靠目前的计算机技术无法解决AI完全问题,还需要借助人类计算。这一特性可能很有用,例如可用于检测人类的存在(验证码 CAPTCHA 的目标正是如此),也可用于计算机安全以抵御暴力破解攻击。<br />
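作为示意,下面是一个极简的"挑战—应答"式验证码草图。这只是本示例假设的玩具实现(make_captcha、verify_captcha 等名称均为虚构,并非任何真实库的 API);真实的 CAPTCHA 会把答案渲染成扭曲的图像或音频,依靠对机器困难、对人容易的感知任务来区分人类。<br />

```python
import random
import string

def make_captcha(rng=None):
    """生成一个极简的验证码挑战(仅作示意,并非安全实现)。"""
    rng = rng or random.Random()
    answer = ''.join(rng.choice(string.ascii_lowercase) for _ in range(6))
    # 真实系统会在此处把 answer 渲染成扭曲的图片;这里仅返回占位文本
    challenge = f"请输入图片中的字符: [distorted:{answer}]"
    return challenge, answer

def verify_captcha(answer, response):
    """大小写不敏感地核对用户应答,忽略首尾空白。"""
    return response.strip().lower() == answer
```
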
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
然而,在20世纪70年代初,人们清楚地认识到,研究人员严重低估了这一项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的"应用人工智能"。进入20世纪80年代,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间线,其中包括"进行日常交谈"这样的通用人工智能目标。为了回应这一项目以及专家系统的成功,工业界和政府都重新将资金投入这一领域。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标始终未能实现。这是20年中的第二次:曾预测通用人工智能即将实现的人工智能研究人员被证明从根本上错了。到了20世纪90年代,人工智能研究人员已因屡开空头支票而名声不佳。他们变得根本不愿做出预测,并且避免提及"人类水平"的人工智能,以免被贴上"狂热梦想家"的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
在20世纪90年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的特定子问题(例如人工神经网络和统计机器学习),取得了远为可观的商业成功和学术声望。这些"应用人工智能"系统如今在整个技术产业中得到广泛使用,学术界和产业界都为这一方向的研究投入了大量资金。目前,这一领域的发展被认为是一种新兴趋势,预计要到10多年以后才会进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过组合求解各个子问题的程序,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:"我相信,这条通往人工智能的自下而上的路线,终有一天会与传统的自上而下的路线在中途相会,从而提供真实世界的能力,以及在推理程序中一直令人沮丧地难以捉摸的常识知识。当比喻意义上的金道钉被钉下、把这两方面的努力连接在一起时,完全智能的机器就会出现。"<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
然而,即使这一基本思路也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在1990年关于'''<font color="#ff8000">符号接地假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:"人们经常表达这样的期望:'自上而下'(符号)的认知建模方法,终将在中间某处与'自下而上'(感官)的方法相会。如果本文中关于接地的考虑是正确的,那么这种期望就是无可救药地模块化的,而且从感觉到符号实际上只有一条可行的路径:自下而上。像计算机软件层那样自由漂浮的符号层永远无法通过这条路径达到(反之亦然),而且也不清楚我们为什么要试图达到这样一个层次,因为那看起来无异于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。"<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
"通用人工智能"一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作战的影响时使用。这个术语在2002年左右由肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。这一研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)始于1984年的 Cyc 项目,以及艾伦·纽厄尔(Allen Newell)的 Soar 项目,都被视为属于通用人工智能的范畴。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为"发表论文和取得初步成果"。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届通用人工智能暑期学校。第一批大学课程于2010年和2011年由托多尔·阿瑙多夫(Todor Arnaudov)在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,短期内无法完全复制。不过,仍有少数计算机科学家活跃于通用人工智能研究,其中许多人正在为一系列通用人工智能会议做出贡献。这类研究极其多样,而且往往具有开创性。格兹尔在其著作的导言中说,对构建一个真正灵活的通用人工智能所需时间的估计从10年到一个多世纪不等,但通用人工智能研究界似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在'''<font color="#ff8000">《奇点临近》</font>'''中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展会如此之快。明确追求通用人工智能的组织包括瑞士人工智能实验室IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog基金会、Adaptive AI、LIDA,以及Numenta及与其相关的红木神经科学研究所(Redwood Neuroscience Institute)。此外,机器智能研究所(Machine Intelligence Research Institute)和OpenAI等组织的建立也是为了影响通用人工智能的发展路径。最后,像人脑计划(Human Brain Project)这样的项目旨在建立对人脑功能的模拟。2017年一项关于通用人工智能的调查对45个已知的、明确地或间接地(通过已发表的研究)研究通用人工智能的"活跃研发项目"进行了分类,其中最大的三个是DeepMind、人脑计划和OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员Feng Liu、Yong Shi和Ying Liu对谷歌AI、苹果Siri等公开且可自由访问的弱人工智能进行了智能测试。这些人工智能最高达到约47的数值,大约相当于一个上一年级的六岁儿童。成年人的平均值约为100。2014年也进行过类似的测试,当时智商分数的最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI.<br />
<br />
2019年,电子游戏程序员兼航空航天工程师约翰·卡马克(John Carmack)宣布了研究通用人工智能的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain 模拟人脑所需要的处理能力==<br />
<br />
<br />
<br />
===Whole brain emulation 全脑模拟===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap>{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
A popularly discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain." Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
实现通用智能行为的一种广受讨论的方法是全脑模拟:通过详细扫描和绘制一个生物大脑,并将其状态复制到计算机系统或其他计算设备中,来建立一个低层次的大脑模型。计算机运行的模拟模型对原始大脑如此忠实,以至于其行为在本质上与原始大脑相同,或者就所有实际目的而言无法区分。"其基本思路是,取一个特定的大脑,详细扫描其结构,并构建一个对原件如此忠实的软件模型,以至于在适当的硬件上运行时,其行为方式与原始大脑基本相同。"在以医学研究为目的的大脑模拟背景下,计算神经科学和神经信息学领域都讨论过全脑模拟;人工智能研究则将其作为实现强人工智能的一种途径加以讨论。能够提供必要的精细理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,质量足够高的大脑图谱将在与所需计算能力相近的时间尺度上出现。<br />
<br />
<br />
<br />
===Early estimates 初步预测===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises. For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10<sup>11</sup> (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10<sup>16</sup> "computations" would be equivalent to 10 petaFLOPS, achieved in 2011). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
图中给出了在不同层次上模拟人脑所需处理能力的估计(来自雷·库兹韦尔,以及安德斯·桑德伯格(Anders Sandberg)和尼克·博斯特罗姆(Nick Bostrom)),并按年份绘制了TOP500榜单上最快超级计算机的性能。注意图中采用对数坐标,其指数趋势线假设计算能力每1.1年翻一番。库兹韦尔认为在神经模拟层次上传思维就将成为可能,而桑德伯格和博斯特罗姆的报告对意识产生于哪个层次则不太确定。进行低层次的大脑模拟需要一台极其强大的计算机。人脑拥有数量庞大的突触:10<sup>11</sup>(1000亿)个神经元中,每个神经元平均与其他神经元有7000个突触连接。据估计,三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触。这个数字随年龄增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>(100万亿到500万亿)个突触不等。基于神经元活动的简单开关模型,大脑处理能力估计约为每秒10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了关于与人脑相当所需硬件的各种估计,并采纳了每秒10<sup>16</sup>次计算(cps)这一数字。(作为比较,如果一次"计算"相当于一次"浮点运算"(一种用于评测当前超级计算机的指标),那么10<sup>16</sup>次"计算"相当于10 petaFLOPS,已于2011年达到。)他据此预测,如果写作当时计算机能力的指数增长持续下去,那么必要的硬件将在2015年至2025年之间出现。<br />
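The synapse count and petaFLOPS conversion above are simple order-of-magnitude arithmetic, and the 1.1-year doubling time from the figure caption can be extrapolated the same way. The sketch below reproduces these numbers; the 1997 baseline of 10<sup>12</sup> FLOPS used for the extrapolation is an illustrative assumption (not a figure from the text), and with it the trendline lands near 2011, when 10 petaFLOPS was in fact reached.

```python
import math

# Order-of-magnitude arithmetic behind the estimates quoted above.
neurons = 1e11               # ~10^11 neurons in the human brain
synapses_per_neuron = 7000   # average synaptic connections per neuron
total_synapses = neurons * synapses_per_neuron
print(f"total synapses ~ {total_synapses:.0e}")   # ~7e14, within the 10^14..5x10^14 adult range

# Kurzweil's adopted hardware figure: 10^16 computations per second (cps).
required_cps = 1e16
# Read as floating-point operations, 10^16 cps is 10 petaFLOPS (1 petaFLOPS = 10^15 FLOPS).
print(f"required ~ {required_cps / 1e15:.0f} petaFLOPS")

# Extrapolate the trendline: capacity doubling every 1.1 years.
# The ~10^12 FLOPS baseline for 1997 is an assumed, illustrative value.
start_flops = 1e12
years = 1.1 * math.log2(required_cps / start_flops)
print(f"~{years:.0f} years after 1997, i.e. around {1997 + round(years)}")
```

Under these assumptions the doubling-time extrapolation gives roughly 15 years from 1997, consistent with the 10 petaFLOPS milestone of 2011 mentioned in the text.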
<br />
<br />
<br />
===Modelling the neurons in more detail 对神经元的更精细的模拟===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<br />
<br />
与生物神经元相比,库兹韦尔所假设的、并在当前许多'''<font color="#ff8000">人工神经网络(artificial neural network)</font>'''实现中使用的人工神经元模型十分简单。大脑模拟很可能必须捕捉生物神经元细致的细胞行为,而目前人们对此只有最粗略的了解。对神经行为的生物、化学和物理细节(特别是分子尺度上)进行完整建模,所需的计算能力将比库兹韦尔的估计高出数个数量级。此外,这些估计没有考虑'''<font color="#ff8000">胶质细胞(glial cells)</font>''',其数量至少与神经元相当,甚至可能多达神经元的10倍,而且现已知它们在认知过程中发挥作用。<br />
<br />
<br />
<br />
=== Current research 研究现状===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The Artificial Intelligence System project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model. The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006. A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<br />
<br />
有一些研究项目正在使用更复杂的神经模型研究大脑模拟,这些模型在传统计算机体系结构上实现。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个"大脑"(含10<sup>11</sup>个神经元)的非实时模拟:在一个由27个处理器组成的集群上,模拟模型的1秒钟耗时50天。2006年,蓝脑计划利用世界上最快的超级计算机架构之一,即IBM的蓝色基因(Blue Gene)平台,对一个由大约10,000个神经元和10<sup>8</sup>个突触组成的单个大鼠'''<font color="#ff8000">新皮质柱(neocortical column)</font>'''进行了实时模拟。一个更长期的目标是建立对人脑生理过程的详细的功能性模拟:"建造一个人脑并非不可能,我们可以在10年内做到,"蓝脑计划主任亨利·马克拉姆(Henry Markram)2009年在牛津举行的TED大会上说。也有一些声称已模拟出猫大脑的说法,但存在争议。神经-硅接口已被提议作为一种可扩展性可能更好的替代实现策略。<br />
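The gap between such simulations and real time follows directly from the figures quoted above (50 days of wall-clock time on 27 processors per simulated second). A minimal sketch of that arithmetic; the final processor count assumes perfectly linear scaling, which is an idealization rather than a claim from the source:

```python
# Real-time slowdown of the 2005 simulation cited above:
# 50 days of wall-clock time for 1 second of simulated brain activity.
SECONDS_PER_DAY = 86_400
wall_clock_s = 50 * SECONDS_PER_DAY   # 4,320,000 seconds
simulated_s = 1
slowdown = wall_clock_s / simulated_s
print(f"slowdown: {slowdown:.1e}x real time")   # ~4.3 million times slower

# Idealized assumption: if the workload scaled linearly with processor count,
# real-time simulation would need on the order of:
processors = 27
processors_for_realtime = processors * slowdown
print(f"~{processors_for_realtime:.1e} processors of the same type")
```

The ~4.3-million-fold slowdown makes concrete why the non-real-time 2005 run and the much smaller real-time Blue Brain column are described as different classes of achievement.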
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
Hans Moravec addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?". He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
汉斯·莫拉维克(Hans Moravec)在他1997年的论文《计算机硬件何时能与人脑匹敌?》中回应了上述论点("大脑更复杂"、"神经元必须建模得更精细")。他测量了现有软件模拟神经组织(特别是视网膜)功能的能力。他的结果既不取决于胶质细胞的数量,也不取决于神经元在何处执行何种处理。<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at the complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
The actual complexity of modeling biological neurons has been explored in the OpenWorm project, which aimed at the complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
OpenWorm项目已经探讨了对生物神经元建模的实际复杂性。该项目旨在完全模拟一条蠕虫,其神经网络中只有302个神经元(总共约1000个细胞)。项目开始之前,这种动物的神经网络已被详尽记录。然而,尽管任务起初看似简单,基于通用神经网络的模型并不奏效。目前,研究的重点是精确模拟生物神经元(部分在分子水平上),但其结果还不能称为完全成功。即使人脑尺度模型中有待解决的问题数量与神经元数量不成比例,沿这条路径所需的工作量也是显而易见的。<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches 对基于模拟的研究方法的批评===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
A fundamental criticism of the simulated brain approach derives from embodied cognition where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning. If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel proposes virtual embodiment (like in Second Life), but it is not yet known whether this would be sufficient.<br />
<br />
对模拟大脑方法的一个根本性批评来自具身认知,该观点将人类的具身性视为人类智能的一个基本方面。许多研究者认为,具身性是意义得以奠基的必要条件。如果这种观点正确,那么任何功能完备的大脑模型都需要包含的不仅仅是神经元(即还需要一个机器人身体)。格策尔(Goertzel)提出了虚拟具身(比如在《第二人生》中),但目前尚不清楚这是否足够。<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest no such simulation exists . There are at least three reasons for this:<br />
<br />
自2005年以来,使用每秒能执行超过10<sup>9</sup>次计算(cps,库兹韦尔的非标准单位"每秒计算次数",见上文)的微处理器的台式计算机已经面世。根据库兹韦尔(和莫拉维克)使用的大脑处理能力估计,这种计算机应该能够支持对蜜蜂大脑的模拟,但尽管存在一些兴趣,这样的模拟并不存在。原因至少有三个:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
The neuron model seems to be oversimplified (see next section).<br />
<br />
神经元模型似乎过于简化了(见下一节)。<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
There is insufficient understanding of higher cognitive processes to establish accurately what the brain's neural activity, observed using techniques such as functional magnetic resonance imaging, correlates with.<br />
<br />
人们对高级认知过程的理解还不够充分,无法准确确定用功能性磁共振成像等技术观察到的大脑神经活动究竟与什么相关。<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
即使我们对认知的理解有了足够的进步,早期的仿真程序也可能非常低效,因此需要多得多的硬件。<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. The Extended Mind thesis formalizes the philosophical concept, and research into cephalopods has demonstrated clear examples of a decentralized system.<br />
<br />
有机体的大脑虽然关键,但可能并不是认知模型的合适边界。为了模拟蜜蜂的大脑,可能还需要模拟其身体和环境。'''<font color="#ff8000">延展心灵论题(The Extended Mind thesis)</font>'''将这一哲学概念形式化,而对头足类动物的研究已经展示了去中心化系统的清晰例子。<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses. Another estimate is 86 billion neurons of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. Glial cell synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
此外,人类大脑的规模目前尚未得到精确界定。一种估计认为,人脑约有1000亿个神经元和100万亿个突触。另一种估计是860亿个神经元,其中163亿个在大脑皮层,690亿个在小脑。胶质细胞的突触目前尚未量化,但已知数量极多。<br />
<br />
<br />
<br />
==Strong AI and consciousness 强人工智能和意识==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He wanted to distinguish between two different hypotheses about artificial intelligence:<br />
<br />
1980年,哲学家约翰·塞尔(John Searle)创造了"强人工智能"(strong AI)一词,作为其中文屋论证的一部分。他想要区分关于人工智能的两种不同假设:<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
*一个人工智能系统可以思考并拥有思维。(词语“思维”对哲学家来说有特殊意义,正如在“身心问题”或“心灵哲学”中的使用一样。)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
*一个人工智能系统只能表现得"好像"它在思考并拥有思维。<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
The first one is called "the strong AI hypothesis" and the second is "the weak AI hypothesis" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
第一个被称为"强人工智能假设",第二个被称为"弱人工智能假设",因为第一个做出了更强的陈述:它假定机器身上发生了某种特殊的事情,超出了我们所能测试的全部能力。塞尔将"'''<font color="#ff8000">强人工智能假说(strong AI hypothesis)</font>'''"称为"强人工智能"。这种用法在人工智能学术研究和教科书中也很常见。例如:<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness 意识===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research 人工意识研究===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research 人工智能研究进展缓慢的可能解释==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
<br />
<br />
There have been many AI researchers who debate over the idea of whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|volume=62|year=2019|journal=Business Horizons|pages=15–25|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael}}</ref><br />
<br />
<br />
<br />
<br />
==Controversies and dangers 争议和风险==<br />
<br />
<br />
<br />
===Feasibility 可能性===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
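The poll summary above (a median around 2040–2050, a mean of 2081, and 16.5% answering "never") is ordinary descriptive statistics over expert forecasts. A minimal sketch of that kind of aggregation, using made-up responses rather than the actual survey data:

```python
# Toy illustration of summarizing AGI-arrival forecasts. The numbers
# below are invented for the example, NOT the cited poll data.
from statistics import median, mean

# Hypothetical answers to "by what year are you 50% confident AGI will
# exist?"; None encodes a "never" response.
responses = [2035, 2040, 2045, 2050, 2050, 2060, 2081, 2100, None, None]

# "Never" answers are excluded from the year statistics and reported
# separately, as in the surveys described above.
years = [r for r in responses if r is not None]
never_fraction = responses.count(None) / len(responses)

print(median(years))       # → 2050.0
print(round(mean(years)))  # → 2058
print(never_fraction)      # → 0.2
```

The split between a year estimate and a separate "never" fraction matters: folding "never" into the average as a large sentinel year would arbitrarily distort the median and mean.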
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}} 对人类的潜在威胁===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
人工智能可能造成存在风险的论点也有许多强烈的反对者。怀疑论者有时指责这一论点带有隐性宗教色彩,即用对超级智能可能性的非理性信仰取代了对全能上帝的非理性信仰;极端者如杰伦·拉尼尔(Jaron Lanier)认为,“当前的机器具有任何形式的智能”这一整套概念本身就是“一种幻觉”,是富人炮制的“惊天骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
现有的许多批评认为,通用人工智能在短期内不太可能实现。计算机科学家戈登·贝尔(Gordon Bell)认为,人类在到达'''<font color="#ff8000">技术奇点(technological singularity)</font>'''之前就会自我毁灭。'''<font color="#ff8000">摩尔定律(Moore's Law)</font>'''的最初提出者戈登·摩尔(Gordon Moore)宣称:“我是一个怀疑论者。我不认为技术奇点会发生,至少在很长一段时间内不会。我也不知道为什么我会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,担心人工智能的存在风险“就像在我们还没踏上火星时就担心火星人口过剩一样”。<br />
<br />
<br />
<br />
==See also 请参阅==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]] 自动机器学习<br />
<br />
* [[Machine ethics]] 机器伦理<br />
<br />
* [[Multi-task learning]] 多任务学习<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]] 超级智能<br />
<br />
* [[Nick Bostrom]] 尼克·博斯特罗姆<br />
<br />
* [[Eliezer Yudkowsky]] 埃利泽·尤德科夫斯基<br />
<br />
* [[Future of Humanity Institute]] 人类未来研究所<br />
<br />
* [[Outline of artificial intelligence]] 人工智能概要<br />
<br />
* [[Artificial brain]] 人工大脑<br />
<br />
* [[Transfer learning]] 迁移学习<br />
<br />
* [[Outline of transhumanism]] 超人类主义概要<br />
<br />
* [[General game playing]] 通用博弈<br />
<br />
* [[Synthetic intelligence]] 合成智能<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
智能放大,利用信息技术加强人类智慧而不是建造外在的通用人工智能<br />
<br />
<br />
==Notes 附注==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References 参考文献==<br />
<br />
{{refbegin|2}}<br />

* Stages of Artificial Intelligence, "[https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science]", 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html}}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010}}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013}}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1}}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4}}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links 外部链接==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15038通用人工智能2020-10-12T14:36:27Z<p>粲兰:</p>
<hr />
<div>此词条暂由彩云小译翻译,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}}<br />

人工通用智能(Artificial general intelligence,AGI)是一种假设的机器智能,它有能力理解或学习任何人类能够完成的智力任务。这是一些人工智能研究的主要目标,也是科幻小说和未来学研究的常见话题。通用人工智能也被称为强人工智能、完全人工智能,或者通用智能行为。<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
一些学术来源将“强人工智能”这一术语保留给能够体验意识的机器。据推测,今天的人工智能距离通用人工智能还有很多年、甚至几十年的差距。<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
一些权威人士强调''强人工智能''与''应用人工智能''(也称''狭义人工智能''或''弱人工智能'')之间的区别。与强人工智能不同,弱人工智能并不旨在实现人类的认知能力,而仅限于使用软件来研究或完成特定的问题求解或推理任务。<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
截至2017年,已有超过四十家机构在研究通用人工智能。<br />
<br />
<br />
<br />
==Requirements 判定要求==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
Various criteria for intelligence have been proposed (most famously the Turing test) but to date, there is no definition that satisfies everyone. However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
人们提出了各种各样的智能标准(最著名的是图灵测试) ,但到目前为止,还没有一个定义能使所有人满意。然而,人工智能研究人员普遍认为,智能需要做到以下几点: <br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
<br />
这个智能特征的列表基于主流的人工智能教科书所涉及的主题,包括:<br />
<br />
{{Harvnb|Russell|Norvig|2003}},<br />
<br />
<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
<br />
<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
<br />
<br />
{{Harvnb|Nilsson|1998}}.<br />
<br />
<br />
</ref><br />
<br />
</ref><br />
<br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];进行推理,使用策略,解决难题,并在不确定条件下做出判断。<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];表示知识,包括常识。<br />
<br />
* [[automated planning and scheduling|plan]];规划。<br />
<br />
* [[machine learning|learn]];学习。<br />
<br />
* communicate in [[natural language processing|natural language]];使用自然语言进行交流。<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.并将上述所有能力整合起来以实现共同目标。<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
Other important capabilities include the ability to sense (e.g. see) and the ability to act (e.g. move and manipulate objects) in the world where intelligent behaviour is to be observed. This would include an ability to detect and respond to hazard. Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in) and autonomy.<br />
<br />
其他重要的能力包括在可观测到智能行为的世界中进行感知(例如视觉)和行动(例如移动和操纵物体)的能力,其中也包括检测和应对危险的能力。许多跨学科的智能研究方法(例如认知科学、计算智能和决策学)倾向于强调还需要考虑其他特征,例如想象力(指形成并非预先编程的意象和概念的能力)和自主性。<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.<br />
<br />
展现出其中许多能力的计算机系统确实存在(参见计算创造性、自动推理、决策支持系统、机器人、进化计算、智能代理),但尚未达到人类水平。<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI 确认人类水平通用人工智能的测试{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
The following tests to confirm human-level AGI have been considered:<br />
<br />
人们考虑过以下用于确认人类水平通用人工智能的测试:<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
The Turing Test (Turing)<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
一台机器和一个人类在互相看不见的情况下分别与第二个人类对话,后者必须评估两者中哪一个是机器;如果机器能在相当大比例的时间里骗过评估者,即通过测试。注意:图灵并没有规定什么才算是智能,只规定了一旦知道对方是机器,就应判定其不合格。<br />
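上述模仿游戏的流程可以用一个极简的代码骨架来示意。这只是在若干假设下的演示草图:`machine`、`human`、`judge` 都是假想的可调用对象,并非任何真实系统,阈值与轮数也仅为示例参数。

```python
import random

def imitation_game(machine, human, judge, rounds=10, threshold=0.3):
    """Sketch of Turing's imitation game: a judge converses blindly with
    two hidden parties (labelled A/B at random) and must name the machine.
    The machine "passes" if it fools the judge in a significant fraction
    of rounds. All three callables are hypothetical stand-ins."""
    fooled = 0
    for _ in range(rounds):
        pair = [("A", machine), ("B", human)]
        random.shuffle(pair)  # the judge cannot see which label is which
        answers = {label: agent("How was your day?") for label, agent in pair}
        accused = judge(answers)  # label the judge believes is the machine
        machine_label = next(label for label, agent in pair if agent is machine)
        if accused != machine_label:
            fooled += 1
    return fooled / rounds >= threshold
```

例如,一个回答明显机械、总能被评估者识破的 `machine` 将无法通过测试;这也呼应了图灵的要求:评估只依据对话,而非对身份的直接了解。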
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
The Coffee Test (Wozniak)<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
The Robot College Student Test (Goertzel)<br />
<br />
机器人大学生考试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
一台机器进入一所大学,学习并通过与人类相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
The Employment Test (Nilsson)<br />
<br />
就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
机器从事一项经济上重要的工作,在同一项工作中表现至少和人类一样好。<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve 等待通用人工智能解决的问题===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<br />
<br />
对于计算机来说,最困难的问题被非正式地称为“AI完全问题”或“AI困难问题”,意味着解决这些问题需要相当于人类智能的通用能力(即强人工智能),超出了特定目的算法的能力范围。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.<br />
<br />
AI完全问题被假设包括通用的计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks.<br />
<br />
仅靠目前的计算机技术无法解决AI完全问题,还需要借助人类计算。这一特性可能很有用,例如像 CAPTCHA 那样用于检测人类的存在,以及在计算机安全中用于抵御暴力破解攻击。<br />
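作为示意,下面是一个极简的 challenge-response(质询应答)门控草图:在多次登录失败后要求完成一个挑战,以此抵御暴力破解。这只是演示控制流程的假想示例;真实的 CAPTCHA 依赖扭曲图像、音频等对程序困难的“AI困难”任务,这里的纯文本挑战并不具备这种难度,类名和参数均为虚构。

```python
import random
import string

class ChallengeGate:
    """Minimal sketch: after too many failed login attempts, require a
    human-solvable challenge before allowing further attempts. A real
    CAPTCHA would pose a hard-AI problem (distorted images, audio); this
    plain-text stand-in only illustrates the control flow."""

    MAX_FAILURES = 3

    def __init__(self):
        self.failures = 0
        self.pending = None  # (question, expected answer)

    def record_failure(self):
        """Count one failed login attempt."""
        self.failures += 1

    def needs_challenge(self):
        """True once the failure budget is exhausted."""
        return self.failures >= self.MAX_FAILURES

    def new_challenge(self):
        """Issue a fresh challenge and remember the expected answer."""
        token = "".join(random.choices(string.ascii_uppercase, k=5))
        self.pending = (f"Type the code: {token}", token)
        return self.pending[0]

    def verify(self, response):
        """Check the response; on success, reset the failure counter."""
        ok = self.pending is not None and response == self.pending[1]
        if ok:
            self.failures = 0
            self.pending = None  # each challenge is single-use
        return ok
```

设计上,挑战是一次性的且成功后重置失败计数:这迫使自动化攻击者在每批尝试之间都要解决一个(理想情况下)只有人类才能完成的任务,从而大幅降低暴力破解的速率。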
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
Modern AI research began in the mid 1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus prediction of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," although Minsky states that he was misquoted.<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<br />
<br />
然而,在20世纪70年代初,研究人员显然严重低估了这一项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。进入20世纪80年代,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,制定了一个十年的时间表,其中包括“进行日常对话”这样的通用人工智能目标。作为对此以及专家系统成功的回应,工业界和政府重新将资金投入这一领域。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标从未实现。这是20年内的第二次,那些预测通用人工智能即将实现的研究人员被证明犯了根本性的错误。到了20世纪90年代,人工智能研究人员已经落下了乱开空头支票的名声。他们变得根本不愿做出预测,并避免提及“人类水平”的人工智能,以免被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<br />
<br />
在20世纪90年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的具体子问题(例如人工神经网络和统计机器学习),取得了远为巨大的商业成功和学术声望。这些“应用人工智能”系统如今在整个技术产业中得到广泛应用,这方面的研究在学术界和产业界都获得了大量资助。目前,这一领域的发展被认为是一种新兴趋势,预计要在10多年后才会进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过组合求解各种子问题的程序,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会与传统的自上而下的路线会合,并且走过一半以上的路程,从而提供推理程序中一直令人沮丧地难以获得的现实世界能力和常识知识。当象征性的金道钉被钉下、将这两方面的努力连为一体时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote><br />
<br />
然而,即使这一基本思路也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的‘自上而下’(符号)方法终将在中间某处与‘自下而上’(感官)方法相会。如果本文中关于符号基础的考虑是正确的,那么这种期望就是无可救药地模块化的,从感觉到符号实际上只有一条可行的路径:自底向上。像计算机软件层那样自由漂浮的符号层永远无法通过这条路径达到(反之亦然);也不清楚我们为什么要试图达到这样一个层次,因为那样做看起来就相当于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.<br />
<br />
“人工通用智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用。这一术语在2002年前后被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。相关的研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)始于1984年的 Cyc 项目,以及艾伦·纽厄尔(Allen Newell)的 Soar 项目,都被认为属于通用人工智能的范围。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了通用人工智能的第一期暑期学校。第一门大学课程由托多尔·阿瑙多夫(Todor Arnaudov)于2010年和2011年在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,在短期内无法完全复制。不过,仍有少数计算机科学家积极参与通用人工智能研究,其中许多人正在为一系列通用人工智能会议做出贡献。这些研究极其多样化,而且往往具有开创性。格兹尔在其著作的序言中说,对制造出真正灵活的通用人工智能所需时间的估计从10年到一个多世纪不等,但通用人工智能研究群体似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI and Apple's Siri. At most, these systems reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade; an adult averages about 100. Similar tests carried out in 2014 yielded a maximum IQ score of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
<br />
<br />
===Early estimates===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS, which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
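The figures in this section are order-of-magnitude arithmetic and can be reproduced directly. The following sketch (illustrative only; all constants are taken from the estimates quoted above) makes the relationships explicit:

```python
# Back-of-envelope arithmetic for the brain-emulation estimates quoted above.
# All constants come from the surrounding text; this illustrates the
# order-of-magnitude reasoning, it is not a model of the brain.

NEURONS = 1e11                # ~10^11 neurons in the human brain
SYNAPSES_PER_NEURON = 7_000   # average synaptic connections per neuron

total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"total synapses ~ {total_synapses:.0e}")  # ~7e14, just above the
                                                 # quoted 1e14-5e14 adult range

SUPS = 1e14          # synaptic updates per second (simple switch model)
KURZWEIL_CPS = 1e16  # Kurzweil's 1997 hardware figure, "computations per second"

# Treating one "computation" as one floating-point operation,
# 10^16 cps corresponds to 10 petaFLOPS (1 petaFLOPS = 1e15 FLOPS).
petaflops = KURZWEIL_CPS / 1e15
print(f"Kurzweil's figure = {petaflops:.0f} petaFLOPS")
```

Note that the naive product (7×10<sup>14</sup>) slightly exceeds the quoted adult range of 10<sup>14</sup> to 5×10<sup>14</sup> synapses, consistent with the text's remark that synapse counts decline from childhood and with how loosely constrained these estimates are.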
<br />
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
===Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aims at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
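The two neuron-count estimates above can be compared with simple arithmetic; a minimal consistency check (figures taken from the text):

```python
# Consistency check on the second neuron-count estimate quoted above.
total_neurons = 86e9   # ~86 billion neurons in the whole brain
cortex = 16.3e9        # cerebral cortex
cerebellum = 69e9      # cerebellum

rest = total_neurons - cortex - cerebellum
print(f"neurons elsewhere in the brain: {rest / 1e9:.1f} billion")  # ~0.7 billion
```

That is, the cortex and cerebellum together account for all but roughly one percent of the neurons in this estimate, while the first estimate (100 billion neurons) differs from the second by about 16 percent, illustrating how loosely constrained these figures remain.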
<br />
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
弱人工智能假说等同于通用人工智能是可能的假说。根据罗素和诺维格的说法,“大多数人工智能研究人员认为弱人工智能假说是理所当然的,并且不关心强人工智能假说。”<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
与 Searle 不同的是,Ray Kurzweil 使用“强人工智能”这个词来描述任何人工智能系统,这个系统的行为就像它有思想一样,不管哲学家能否确定它是否真的有思想。<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
在科幻小说中,通用人工智能与生物所具有的意识、知觉、智慧和自我意识等特征有关。然而,根据塞尔的说法,一般智能是否足以产生意识还是一个悬而未决的问题。“强人工智能”(如上文库兹韦尔所定义的)不应与塞尔的“强人工智能假设”相混淆。强人工智能假设认为,一台行为像人一样智能的计算机,必然也拥有思想和意识。通用人工智能只是指机器所表现出的智能程度,与有无思想无关。<br />
<br />
<br />
<br />
===Consciousness 意识===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
除了与在科幻小说和人工智能伦理中扮演重要角色的强人工智能概念有关的智能,人类思维还有其他方面:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
意识:拥有主观体验和思想。值得一提的是意识是很难定义的。由托马斯·内格尔给出的一个著名定义陈述如下:一个事物如果能体会到某种感觉,那么它是有意识的。如果我们不是有意识的,那么我们不会有任何感觉。内格尔以蝙蝠为例:我们可以凭借感觉问出:“成为一只蝙蝠的感觉如何?”但是,我们不大可能问出:“成为一个吐司机的感觉如何?”内格尔总结认为蝙蝠像是有意识的(即拥有意识),但是吐司机却不是。<br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
自我意识:能够意识到自己是一个独立的个体,尤其是意识到自己的思想。<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
知觉:主观地“感受”感知或情绪的能力。<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
智慧:具备智慧的能力。<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
这些特征具有道德维度,因为拥有这种形式的强人工智能的机器可能拥有法律权利,类似于非人类动物的权利。因此,人们已就如何将完备的道德主体纳入现有法律和社会框架开展了初步工作。这些方法都集中在“强”人工智能的法律地位和权利上。<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
然而,比尔·乔伊(Bill Joy)等人认为,具有这些特征的机器可能会威胁到人类的生命或尊严。这些特征对于强人工智能来说是否必要还有待证明。意识的作用并不清楚,目前也没有公认的检测其存在的方法。如果一台机器装有模拟意识神经相关物的装置,它会自动具有自我意识吗?也有可能其中一些特性(比如知觉)会从完全智能的机器中自然涌现,或者一旦机器开始以明显智能的方式行动,人们就会自然而然地把这些特性归于机器。例如,智能行为可能足以判定机器产生了知觉,而非反过来。<br />
<br />
<br />
<br />
===Artificial consciousness research 人工意识研究===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
虽然意识在强人工智能/通用人工智能中的作用是有争议的,但是很多通用人工智能的研究人员认为研究实现意识的可能性是至关重要的。在早期的努力中,伊格尔·亚历山大(Igor Aleksander)认为创造一个有意识的机器的原则已经存在,但是训练这样一个机器去理解语言可能需要四十年。<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research 人工智能研究进展缓慢的可能解释==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
自1956年人工智能研究发轫以来,这一领域的发展随着时间的推移而放缓,创造具有人类水平智能行为的机器这一目标也陷入停滞。对这种延迟的一个可能解释是计算机缺乏足够的内存或处理能力。此外,人工智能研究过程本身的复杂程度也可能限制其进展。<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
虽然大多数人工智能研究人员相信强人工智能可以在未来实现,但也有一些人,如休伯特·德雷福斯(Hubert Dreyfus)和罗杰·彭罗斯(Roger Penrose),否认实现强人工智能的可能性。约翰·麦卡锡(John McCarthy)等多位计算机科学家相信人类水平的人工智能终将实现,但其具体日期无法准确预测。<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究人员可能需要修改本学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如威廉·克罗克森(William Clocksin)在2003年所写:“这个框架始于魏岑鲍姆(Weizenbaum)的观察,即智能只在特定的社会和文化背景下表现出来。”<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
此外,人工智能研究人员已经能够创造出能执行对人类而言很复杂的工作(如数学)的计算机,但反过来,他们却难以开发出能执行对人类而言很简单的任务(如行走)的计算机,即莫拉维克悖论(Moravec's paradox)。大卫·格勒尼特(David Gelernter)描述的一个问题是,有些人认为思考和推理是等价的。然而,思想与产生这些思想的主体是否彼此独立的问题,引起了人工智能研究者的兴趣。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员做出却未能兑现的预测,以及对人类行为缺乏完整理解,削弱了人类水平人工智能这一最初设想。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究人员仍对在21世纪实现人工智能的目标持乐观态度。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
对于强人工智能研究为何旷日持久,还有其他可能的解释。科学问题的错综复杂,以及需要通过心理学和神经生理学充分了解人脑,限制了许多研究人员在计算机硬件中模拟人脑功能的工作。许多研究人员倾向于低估对人工智能未来预测的种种怀疑,但如果不认真对待这些问题,人们就会忽视疑难问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
克罗克森说,可能阻碍人工智能研究进展的一个概念上的局限是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员最初瞄准人工智能这一目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并找出如何设计执行特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
抽象的实践(人们在研究中处理特定语境时往往会对其重新定义)使研究人员得以只专注于少数几个概念。抽象在人工智能研究中最富成效的应用来自规划和问题求解。虽然其目标是提高计算速度,但抽象的作用也对抽象算子的参与提出了问题。<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
人工智能发展缓慢的一个可能原因,与许多人工智能研究人员的如下共识有关:启发式方法是计算机表现与人类表现之间存在重大差距的一个领域。编入计算机的特定功能或许能够满足使其与人类智能相匹配的许多要求。这些解释未必是强人工智能迟迟未能实现的根本原因,但得到了众多研究人员的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
许多人工智能研究人员一直在争论机器是否应该带有情感。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感总结了人类的经历,因为它允许人们记住那些经历。大卫·格勒尼特(David Gelernter)写道: “除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情绪的关注给人工智能研究人员带来了一些问题,随着未来人工智能研究的进展,它与强人工智能的概念联系起来。<br />
<br />
<br />
<br />
==Controversies and dangers 争议和风险==<br />
<br />
<br />
<br />
===Feasibility 可行性===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
截至2020年3月,通用人工智能仍属推测,因为迄今尚未有人展示过此类系统。对于通用人工智能是否会到来以及何时到来,人们的看法各不相同。在一个极端,人工智能先驱赫伯特·西蒙(Herbert A. Simon)在1965年写道:“机器将在20年内有能力完成人类能做的任何工作。”然而,这个预言并没有实现。微软(Microsoft)联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”和“对认知的科学层面的深入理解”。机器人专家阿兰·温菲尔德(Alan Winfield)在《卫报》(The Guardian)上发表文章称,现代计算与人类水平人工智能之间的鸿沟,就像当前的太空飞行与实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
人工智能专家对通用人工智能可行性的看法时起时落,并可能在2010年代有所回升。2012年和2013年进行的四次民意调查显示,对于“有50%把握认为通用人工智能将会到来”的年份,专家们猜测的中位数为2040年至2050年(因调查而异),平均值为2081年。当被问及同样的问题、但把握提高到90%时,16.5%的专家回答“永远不会”。关于当前通用人工智能进展的进一步讨论,可参见下文“确认人类水平通用人工智能的测试”和“通用人工智能的智商测试”。<br />
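The gap between the polls' median (2040–2050) and mean (2081) is what a right-skewed distribution produces: a minority of very-far-future answers pulls the mean upward while barely moving the median. A minimal sketch with hypothetical numbers (not the polls' raw data):<br />

```python
# Illustrative only: hypothetical "50%-confidence" AGI arrival years,
# not the raw responses from the 2012-2013 polls.
predictions = sorted([2030, 2035, 2040, 2045, 2050, 2060, 2100, 2200, 2300])

n = len(predictions)
median = predictions[n // 2]   # middle value: insensitive to how far out the tail goes
mean = sum(predictions) / n    # pulled upward by the far-future answers

print(median)  # 2050
print(mean)    # about 2095.6
```

With a symmetric distribution the two statistics would coincide; the large mean–median gap reported in the surveys is itself evidence of a long tail of distant (or "never") estimates.<br />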
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}} 对人类的潜在威胁===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
人工智能构成生存风险、且这一风险需要得到远比目前更多关注的论点,已经得到许多公众人物的支持;其中最著名的也许是埃隆·马斯克(Elon Musk)、比尔·盖茨(Bill Gates)和斯蒂芬·霍金(Stephen Hawking)。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素(Stuart J. Russell)。该论点的支持者有时会对怀疑论者表示困惑:盖茨表示他不“理解为什么有些人不担心”,霍金则在2014年的社论中批评了普遍的冷漠:“面对可能带来无法估量的收益和风险的未来,专家们肯定会尽一切可能确保最好的结果,对吗?错了。如果一个更先进的外星文明给我们发来信息说‘我们几十年后就到’,我们会只回答‘好的,到了给我们打电话,我们会把灯留着’吗?大概不会,但这或多或少就是人工智能领域正在发生的事情。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
许多关注生存风险的学者认为,最好的前进方式是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个问题:程序员可以实现哪些类型的保障措施、算法或架构,以最大程度地提高其递归自我改进的人工智能在达到超级智能后继续以友好而非破坏性的方式运行的可能性?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
<br />
* "[https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Stages of Artificial Intelligence]", Computer Science, 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
<hr />
<div>This entry was machine-translated and has not yet been manually reviewed or edited; apologies for any reading inconvenience.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
<br />
其他重要的能力包括在可观察到智能行为的世界中进行感知(例如视觉)和行动(例如移动和操纵物体)的能力。这还包括检测和应对危险的能力。许多跨学科的智能研究方法(例如认知科学、计算智能和决策)倾向于强调有必要考虑额外的特征,例如想象力(指形成未被编入程序的意象和概念的能力)和自主性。<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
能够展示出许多此类能力的计算机系统确实存在(参见计算创造性、自动推理、决策支持系统、机器人、进化计算、智能代理),但尚未达到人类水平。<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI 确认人类水平通用人工智能的测试{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
<br />
人们已考虑用下列测试来确认人类水平的通用人工智能:<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
<br />
一台机器和一个人在互不见面的情况下分别与第二个人对话,后者必须判断两者中哪一个是机器;如果机器能在相当大比例的时间里骗过评估者,它就通过了测试。注意:图灵并没有规定什么才算得上智能,只规定了一旦知道对方是机器,就应将其排除。<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
<br />
一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
<br />
机器人大学生测试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
<br />
一台机器进入一所大学,学习并通过与人类相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
<br />
就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
机器从事一项经济上重要的工作,在同一项工作中表现至少和人类一样好。<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve 等待通用人工智能解决的问题===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
对计算机而言最困难的问题被非正式地称为“AI完全”或“AI困难”问题,这意味着解决它们需要相当于人类智能的通用能力(即强人工智能),超出了特定目的算法的能力范围。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
AI完全问题被假设包括通用计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
<br />
仅靠目前的计算机技术无法解决AI完全问题,还需要人工计算。这一特性可能很有用,例如可以用来检验人类是否在场(CAPTCHA 的目标正是如此),也可以用于计算机安全以抵御暴力破解攻击。<br />
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
然而,在20世纪70年代初,研究人员显然严重低估了该项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。随着20世纪80年代的开始,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间表,其中包括“进行日常交谈”之类的通用人工智能目标。为了回应这一计划以及专家系统的成功,工业界和政府都重新将资金投入这一领域。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标始终未能实现。这是20年里第二次,曾预测通用人工智能即将实现的人工智能研究人员被证明犯了根本性的错误。到了20世纪90年代,人工智能研究人员已因做出空洞的承诺而声名不佳。他们变得根本不愿做出预测,并避免提及“人类水平”的人工智能,以免被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
<br />
在20世纪90年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的具体子问题(例如人工神经网络和统计机器学习),取得了远为可观的商业成功和学术声望。这些“应用人工智能”系统如今在整个技术产业中得到广泛应用,这方面的研究在学术界和产业界都获得了大量资助。目前,这一领域的发展被认为是一种新兴趋势,预计要在10多年后才会进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
<br />
大多数主流人工智能研究人员希望,通过结合解决各种子问题的程序,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条通往人工智能的自下而上的路线,终有一天会与传统的自上而下的路线中途相遇,为推理程序提供一直令人沮丧地难以获得的真实世界能力和常识知识。当象征性的黄金道钉被钉下、将这两股努力连接在一起时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
<br />
然而,即便是这一基本思路也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在其1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的‘自上而下’(符号)方法终将在中间某处与‘自下而上’(感官)方法相会。如果本文中这些关于符号基础的考虑是正确的,那么这种期望就是无可救药地模块化的,从感官到符号其实只有一条可行的路径:从底层开始。像计算机软件层那样自由漂浮的符号层,永远无法通过这条路径到达(反之亦然);也不清楚我们为什么要试图达到这样一个层次,因为到达那里似乎只会把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
<br />
“通用人工智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用。这个术语在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。相应的研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)的 Cyc 项目(始于1984年)和艾伦·纽厄尔(Allen Newell)的 Soar 项目都被认为属于通用人工智能的范畴。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届通用人工智能暑期学校。第一门大学课程由托多尔·阿瑙多夫(Todor Arnaudov)于2010年和2011年在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,在短期内无法完全复制。不过,仍有少数计算机科学家活跃于通用人工智能研究,其中许多人正在为一系列通用人工智能会议做出贡献。这些研究极其多样化,而且往往具有开创性。格兹尔在他的书的序言中说,对于制造一个真正灵活的通用人工智能所需时间的估计,从10年到超过一个世纪不等,但通用人工智能研究界似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
<br />
然而,大多数主流的人工智能研究人员怀疑进展是否会如此之快。明确寻求通用人工智能的组织包括瑞士人工智能实验室 IDSIA、Nnaisense 和 Vicarious。此外,机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等组织的建立也是为了影响通用人工智能的发展路径。最后,像人脑计划(Human Brain Project)这样的项目,其目标是建立对人脑的功能性模拟。2017年的一项针对通用人工智能的调查对45个明确地或隐含地(通过已发表的研究)研究通用人工智能的已知“活跃研发项目”进行了分类,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对公开可用且可自由访问的弱人工智能(如谷歌人工智能或苹果的 Siri 等)进行了智能测试。这些人工智能最高达到了约47的数值,大致相当于一名上一年级的六岁儿童。成年人的平均值约为100。2014年也进行过类似的测试,当时智商分数的最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
2019年,电子游戏程序员和航空航天工程师约翰·卡马克(John Carmack)宣布了研究通用人工智能的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain 模拟人脑所需要的处理能力==<br />
<br />
<br />
<br />
===Whole brain emulation 全脑模拟===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popular discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
实现通用智能行为的一种被广泛讨论的方法是全脑模拟:通过详细扫描和绘制生物大脑,并将其状态复制到计算机系统或其他计算设备中,来构建一个低层次的大脑模型。计算机运行的模拟模型对原始大脑如此忠实,以至于它的行为在本质上与原始大脑相同,或者就所有实际目的而言难以区分。<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
“其基本思路是,选取一个特定的大脑,详细扫描其结构,并构建一个对原始大脑如此忠实的软件模型,以至于在适当的硬件上运行时,它的行为方式与原始大脑基本相同。”在出于医学研究目的的大脑模拟背景下,全脑模拟在计算神经科学和神经信息学中得到了讨论;在人工智能研究中,它被作为实现强人工智能的一种途径来讨论。能够提供必要的细致理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,质量足够高的大脑图谱将与所需的计算能力在相近的时间尺度上出现。<br />
<br />
<br />
<br />
===Early estimates 早期估计===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, <{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}> Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
对在不同层次上模拟人脑所需处理能力的估计(来自雷·库兹韦尔以及安德斯·桑德伯格和尼克·博斯特罗姆),连同按年份绘制的 TOP500 榜单上最快的超级计算机。注意图中采用对数坐标,其指数趋势线假设计算能力每1.1年翻一番。库兹韦尔认为在神经模拟层次上就可以实现思维上传,而桑德伯格和博斯特罗姆的报告对意识从何处产生则不太确定。]为进行低层次的大脑模拟,需要一台极其强大的计算机。人脑拥有数量巨大的突触:10<sup>11</sup>(1000亿)个神经元中,每个神经元平均与其他神经元有7000个突触连接。据估计,三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触。这个数字随年龄增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>(100万亿到500万亿)个突触不等。基于神经元活动的简单开关模型,对大脑处理能力的一个估计是每秒约10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了对与人脑相当的硬件的各种估计,并采用了每秒10<sup>16</sup>次计算(cps)这一数字。(作为比较,如果一次“计算”相当于一次“浮点运算”,即用于评价当前超级计算机的指标,那么10<sup>16</sup>次“计算”相当于10 petaFLOPS,这一性能已于2011年实现。)他据此预测,如果计算机性能在当时的指数增长持续下去,那么必要的硬件将在2015年至2025年之间出现。<br />
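The figures above lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only: every number in it is one of the rough estimates quoted in this section, not a measurement, and the 1.1-year doubling period is the trendline assumption from the figure caption.

```python
import math

# Rough estimates quoted above -- not measured values.
NEURONS = 1e11               # ~100 billion neurons
SYNAPSES_PER_NEURON = 7e3    # ~7,000 synaptic connections each

# Naive total synapse count implied by the two figures above.
total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"implied synapses: {total_synapses:.0e}")           # 7e+14

# Kurzweil's 1997 figure: 1e16 computations per second (cps).
# Treating one "computation" as one floating-point operation:
kurzweil_cps = 1e16
print(f"equivalent: {kurzweil_cps / 1e15:.0f} petaFLOPS")  # 10 petaFLOPS

# Moravec's estimate was ~1e14 cps. With capacity doubling every
# 1.1 years, the 100x gap between the two estimates closes in:
doublings = math.log2(kurzweil_cps / 1e14)
print(f"{doublings * 1.1:.1f} years")                      # 7.3 years
```

Note that the naive product (7×10<sup>14</sup>) lands just above the adult range quoted above (10<sup>14</sup> to 5×10<sup>14</sup>), consistent with the text's observation that synapse counts decline from childhood to adulthood.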
<br />
<br />
<br />
===Modelling the neurons in more detail 对神经元的更精细的模拟===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
与生物神经元相比,库兹韦尔所假设的、并在当前许多人工神经网络实现中使用的人工神经元模型是简单的。大脑模拟可能必须捕捉生物神经元细胞层面行为的细节,而人们目前对其仅有最粗略的了解。对神经行为的生物、化学和物理细节(尤其是在分子尺度上)进行完整建模所带来的开销,将需要比库兹韦尔的估计大几个数量级的计算能力。此外,这些估计没有考虑胶质细胞:其数量至少与神经元相当,甚至可能多达神经元数量的10倍(10:1),并且现已知它们在认知过程中发挥作用。<br />
<br />
<br />
<br />
=== Current research 研究现状===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
有一些研究项目正在使用更复杂的神经模型研究大脑模拟,这些模型在传统计算机体系结构上实现。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个“大脑”(含10<sup>11</sup>个神经元)的非实时模拟:在一个由27个处理器组成的集群上,模拟模型的1秒钟耗费了50天。2006年,蓝脑计划利用当时世界上最快的超级计算机架构之一,即 IBM 的蓝色基因(Blue Gene)平台,创建了对单个大鼠新皮质柱的实时模拟,其中包含约10,000个神经元和10<sup>8</sup>个突触。一个更长期的目标是建立对人脑生理过程的详细的功能性模拟:蓝脑计划主任亨利·马克拉姆(Henry Markram)2009年在牛津举行的 TED 大会上说:“建造一个人脑并非不可能,我们可以在10年内做到。”此外还有一些声称模拟了猫脑的说法,但存在争议。神经-硅接口已被提议作为一种可能具有更好可扩展性的替代实现策略。<br />
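The gap between a non-real-time and a real-time simulation can be made concrete with one line of arithmetic: the 2005 run quoted above needed 50 days of wall-clock time per second of simulated time.

```python
# Slowdown of the 2005 non-real-time "brain" simulation described above:
# 50 days of wall-clock time for 1 second of simulated model time.
SECONDS_PER_DAY = 86_400
wall_clock_seconds = 50 * SECONDS_PER_DAY   # 4,320,000 s
simulated_seconds = 1

slowdown = wall_clock_seconds / simulated_seconds
print(f"{slowdown:,.0f}x slower than real time")  # 4,320,000x slower than real time
```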
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
汉斯·莫拉维克(Hans Moravec)在他1997年的论文《计算机硬件何时能与人脑匹敌?》中回应了上述论点(“大脑更复杂”、“必须对神经元进行更细致的建模”)。他测量了现有软件模拟神经组织(特别是视网膜)功能的能力。他的结果既不依赖于胶质细胞的数量,也不依赖于神经元在何处执行何种处理。<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in [[OpenWorm|OpenWorm project]] that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
OpenWorm 项目已经探讨了建模生物神经元的实际复杂性。该项目旨在完全模拟一个蠕虫,其神经网络中只有302个神经元(在总共约1000个细胞中)。项目开始之前,蠕虫的神经网络已经被很好地记录了下来。然而,尽管任务一开始看起来很简单,基于一般神经网络的模型并不起作用。目前,研究的重点是精确模拟生物神经元(部分在分子水平上) ,但结果还不能被称为完全成功。即使在人脑尺度的模型中需要解决的问题的数量与神经元的数量不成比例,沿着这条路径走下去的工作量也是显而易见的。<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches 对基于模拟的研究方法的批评===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
对模拟大脑方法的一个根本性批评来自具身认知,该理论认为人类的具身性是人类智能的一个本质方面。许多研究者认为,具身性对于意义的落地(grounding)是必要的。如果这种观点正确,那么任何功能完备的大脑模型都需要包含的不只是神经元(即还需要一个机器人身体)。格策尔(Goertzel)提出了虚拟具身(如在《第二人生》中),但目前尚不清楚这是否足够。<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
自2005年以来,使用能够达到10<sup>9</sup> cps 以上(库兹韦尔的非标准单位“每秒计算次数”,见上文)的微处理器的台式计算机已经普及。根据库兹韦尔(和莫拉维克)使用的大脑处理能力估算,这样的计算机应该能够支持对蜜蜂大脑的模拟,但尽管有人对此感兴趣,这样的模拟并不存在。其原因至少有三个:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
神经元模型似乎过于简化了(见下一节)。<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
人们对高级认知过程的理解不够充分,因而无法准确确定通过功能性磁共振成像等技术观察到的大脑神经活动究竟与什么相关。<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
即使我们对认知的理解有了足够的进步,早期的仿真程序也可能非常低效,因此需要更多的硬件。<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
有机体的大脑虽然关键,但可能不是认知模型的合适边界。要模拟蜜蜂的大脑,可能还需要模拟其身体和环境。“延展心灵”(The Extended Mind)论题将这一哲学概念形式化,而对头足类动物的研究已经展示了去中心化系统的明确例子。<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
此外,目前对人脑的规模还没有精确的界定。一种估计认为人脑约有1000亿个神经元和100万亿个突触;另一种估计则是860亿个神经元,其中163亿个位于大脑皮层,690亿个位于小脑。胶质细胞的突触目前尚无定量数据,但已知数量极多。<br />
<br />
<br />
<br />
==Strong AI and consciousness 强人工智能和意识==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
1980年,哲学家约翰·塞尔(John Searle)创造了“强人工智能”(strong AI)一词,作为其中文屋论证的一部分。他想区分关于人工智能的两种不同假设:<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
*一个人工智能系统可以思考并拥有思维。(词语“思维”对哲学家来说有特殊意义,正如在“身心问题”或“心灵哲学”中的使用一样。)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
*一个人工智能系统只能表现得好像它在思考并拥有思维。<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
第一条被称为“强人工智能假设”,第二条被称为“弱人工智能假设”,因为第一条做出了更强的陈述:它假定机器身上发生了某种特殊的事情,超出了我们所能测试的其全部能力。塞尔将“强人工智能假设”称为“强人工智能”。这种用法在人工智能学术研究和教科书中也很常见。例如:<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
弱人工智能假说等同于“通用人工智能是可能的”这一假说。根据罗素和诺维格的说法,“大多数人工智能研究人员把弱人工智能假说视为理所当然,并不关心强人工智能假说。”<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
Besides intelligence, there are other aspects of the human mind that are relevant to the concept of strong AI and that play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed over time and has stalled the aim of creating machines capable of intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack sufficient memory or processing power.{{sfn|Clocksin|2003}} In addition, the complexity of the problems involved in AI research may also limit its progress.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions promised by AI researchers and the lack of a complete understanding of human behavior have undermined the original vision of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the slow progress of research toward strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating its function in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate the doubt involved in future predictions of AI, but without taking those issues seriously, people can overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
<br />
Abstraction, which researchers tend to redefine when working in a particular context, provides a way to concentrate on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area where a significant gap remains between computer and human performance.{{sfn|McCarthy|2007}} The specific functions programmed into a computer may account for many of the requirements that allow it to match human intelligence. These explanations are not guaranteed to be the fundamental causes of the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
<br />
<br />
Many AI researchers have debated over whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI, and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers, and it connects to the concept of strong AI as research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|volume=62|year=2019|journal=Business Horizons|pages=15–25|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael}}</ref><br />
<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15035通用人工智能2020-10-12T09:02:47Z<p>粲兰:</p>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi and has not yet been manually organized or proofread; apologies for any inconvenience while reading.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
{{Harvnb|Russell|Norvig|2003}},<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
{{Harvnb|Nilsson|1998}}.<br />
</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
Other important capabilities include the ability to sense (e.g. see) and the ability to act (e.g. move and manipulate objects) in the world where intelligent behaviour is to be observed. This would include an ability to detect and respond to hazard. Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in) and autonomy.<br />
<br />
其他重要的能力包括在需要观察智能行为的世界中进行感知(例如视觉)和行动(例如移动和操纵物体)的能力。这还包括检测和应对危险的能力。许多跨学科的智能研究方法(例如认知科学、计算智能和决策科学)倾向于强调有必要考虑额外的特质,例如想象力(指形成并非预先编入程序的意象和概念的能力)和自主性。<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.<br />
<br />
展现出许多这类能力的计算机系统确实存在(参见计算创造性、自动推理、决策支持系统、机器人、进化计算、智能代理),但尚未达到人类水平。<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI 确认人类水平通用人工智能的测试{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
The following tests to confirm human-level AGI have been considered:<br />
<br />
人们考虑过以下用于确认人类水平通用人工智能的测试:<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
The Turing Test (Turing)<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
一台机器和一个人类在互不见面的情况下分别与第二个人类交谈,后者必须判断两者中哪一个是机器;如果机器能在相当大比例的时间里骗过评判者,它就通过了测试。注意:图灵并没有规定什么才算智能,只规定了“知道它是一台机器”就应取消其资格。<br />
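上述判定标准可以用一个极简的示意片段来表达。需要说明的是,这只是一个假设性的示意:其中的 30% 阈值和 judge 评判函数都是为演示而引入的假设,图灵本人并未规定具体数值:

```python
import random

def turing_test(judge, n_trials=1000, fool_threshold=0.30):
    """若机器在至少 fool_threshold 比例的轮次中骗过评判者,则视为通过。

    judge(transcript_a, transcript_b) 返回其认为是机器的一方(0 或 1)。
    阈值 0.30 只是为演示而设的假设,图灵并未给出具体数字。
    """
    fooled = 0
    for _ in range(n_trials):
        machine_slot = random.randint(0, 1)  # 随机决定机器所在的位置,对评判者不可见
        guess = judge("对话记录 A", "对话记录 B")
        if guess != machine_slot:
            fooled += 1  # 评判者猜错,即被"骗过"一次
    return fooled / n_trials >= fool_threshold

# 随机乱猜的评判者约有一半轮次被"骗过",因此任何机器都能"通过"——
# 这说明测试结果同样取决于评判者的水平。
random_judge = lambda a, b: random.randint(0, 1)
print(turing_test(random_judge))
```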
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
The Coffee Test (Wozniak)<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
The Robot College Student Test (Goertzel)<br />
<br />
机器人大学生考试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
一台机器进入一所大学,学习并通过与人类相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
The Employment Test (Nilsson)<br />
<br />
就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
机器从事一项经济上重要的工作,在同一项工作中表现至少和人类一样好。<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve 等待通用人工智能解决的问题===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<br />
<br />
对计算机而言最困难的问题被非正式地称为“AI完全问题”或“AI困难问题”,意思是解决这些问题需要相当于人类智能的通用能力(即强人工智能),超出了特定用途算法的能力范围。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.<br />
<br />
据推测,AI完全问题包括通用的计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks.<br />
<br />
仅凭目前的计算机技术无法解决AI完全问题,还需要借助人类计算。这一特性很有用处,例如可以用来检测人类是否在场(这正是 CAPTCHA 的目标),也可以用于计算机安全以抵御暴力破解攻击。<br />
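上述安全用途可以用一个示意性的片段来说明。这纯属示意:其中的 guarded_login、solve_challenge 等名称、免验证次数和判定逻辑都是为演示而引入的假设,并非任何真实系统的实现:

```python
# 示意:利用"AI完全"难题抵御暴力破解——在若干次失败尝试之后,
# 要求访问者先完成一个对机器而言困难的挑战(如 CAPTCHA),再继续尝试。
MAX_FREE_ATTEMPTS = 3  # 假设:允许的免验证失败次数

def guarded_login(check_password, solve_challenge, attempts):
    """attempts 为 (密码猜测, 挑战答案) 的列表;超过免验证次数后须先通过挑战。"""
    failures = 0
    for guess, challenge_answer in attempts:
        if failures >= MAX_FREE_ATTEMPTS and not solve_challenge(challenge_answer):
            continue  # 未通过人类验证:此次猜测被直接丢弃,从而拖慢自动化攻击
        if check_password(guess):
            return True
        failures += 1
    return False
```

在这种设计下,自动化的暴力破解脚本在前几次失败后,每一次后续猜测都必须先解决一个假定为"AI困难"的挑战,攻击速度因此被大幅限制。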
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
Modern AI research began in the mid 1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus prediction of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," although Minsky states that he was misquoted.<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<br />
<br />
然而,在20世纪70年代初,研究人员显然严重低估了该项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。随着20世纪80年代的开始,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间表,其中包括“进行日常交谈”之类的通用人工智能目标。为了回应这一计划以及专家系统的成功,工业界和政府都重新向这一领域注入资金。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标也从未实现。这是20年里的第二次:曾预测通用人工智能即将实现的人工智能研究人员被证明犯了根本性的错误。到了20世纪90年代,人工智能研究人员已因做出虚假承诺而声名不佳。他们变得根本不愿做预测,并避免提及“人类水平”的人工智能,因为他们害怕被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<br />
<br />
在20世纪90年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的特定子问题(例如人工神经网络和统计机器学习),取得了远为可观的商业成功和学术声望。这些“应用人工智能”系统如今在整个技术产业中得到广泛应用,学术界和产业界都为这一方向的研究投入了大量资金。目前,这一领域的发展被视为一种新兴趋势,预计要到10年以上之后才会进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过组合解决各种子问题的程序来开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会在中途之后与传统的自上而下的路线相遇,为推理程序提供一直令人沮丧地难以捉摸的现实世界能力和常识知识。当象征意义上的金道钉被钉下、把这两方面的努力连接起来时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote><br />
<br />
然而,即使这一基本理念也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的“自上而下”(符号)方法终将在中间某处与“自下而上”(感官)方法相遇。如果本文中关于符号基础的考虑是正确的,那么这种期望就是无可救药的模块化思维,从感知到符号实际上只有一条可行的路径:自下而上。像计算机软件层那样自由浮动的符号层永远无法经由这条路径到达(反之亦然),也不清楚我们为什么要试图到达这样一个层面,因为那样做似乎只相当于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.<br />
<br />
“通用人工智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作战的影响时使用。这个术语在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。相关研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)始于1984年的 Cyc 项目,以及艾伦·纽厄尔(Allen Newell)的 Soar 项目,都被认为属于通用人工智能的范畴。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届通用人工智能暑期学校。第一批大学课程于2010年和2011年由托多尔·阿瑙多夫(Todor Arnaudov)在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,在短期内无法完全复制。不过,仍有少数计算机科学家积极参与通用人工智能研究,其中许多人正在为一系列通用人工智能会议做出贡献。这些研究极其多样,而且往往具有开创性。格兹尔在其著作的序言中说,对构建一个真正灵活的通用人工智能所需时间的估计从10年到一个多世纪不等,但通用人工智能研究社区似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展是否会如此之快。明确寻求通用人工智能的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及其附属的红杉神经科学研究所(Redwood Neuroscience Institute)。此外,机器智能研究所和 OpenAI 等机构也已成立,以影响通用人工智能的发展路径。最后,像人脑计划这样的项目的目标是建立一个可运行的人脑模拟。2017年针对通用人工智能的一项调查对45个已知的、明确或隐含地(通过已发表的研究)研究通用人工智能的“活跃研发项目”进行了分类,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain==
<br />
<br />
<br />
===Whole brain emulation===
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
<br />
<br />
<br />
===Early estimates===
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at the level of neural simulation, while the Sandberg and Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS, which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict that the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
<br />
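The figures above are order-of-magnitude multiplications, and a short Python sketch (using only the estimates quoted in this section; the variable names are illustrative) makes the arithmetic explicit:<br />

```python
# Order-of-magnitude arithmetic for the brain-capacity estimates above.

neurons = 1e11                 # ~10^11 neurons (one hundred billion)
synapses_per_neuron = 7_000    # average synaptic connections per neuron
naive_synapses = neurons * synapses_per_neuron  # = 7x10^14

# Note: this naive product (7x10^14) overshoots the quoted adult range of
# 10^14 to 5x10^14 synapses -- the per-neuron average and the adult totals
# come from different estimates.

kurzweil_cps = 1e16              # Kurzweil's figure: 10^16 computations/second
petaflops = kurzweil_cps / 1e15  # 1 petaFLOPS = 10^15 FLOP/s
print(f"{naive_synapses:.0e} synapses, {petaflops:.0f} petaFLOPS")
# prints: 7e+14 synapses, 10 petaFLOPS
```

The mismatch flagged in the middle comment is one reason the text reports a range of synapse counts rather than a single number.<br />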
<br />
<br />
===Modelling the neurons in more detail===
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account for [[glial cells]], which are at least as numerous as neurons, may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
<br />
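For contrast with the biological detail discussed above, the "simple" model in question is essentially a weighted sum passed through a nonlinearity. A minimal Python sketch (illustrative only; real ANN implementations add learning machinery on top of this single step):<br />

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Classic point-neuron model used in most artificial neural networks.

    Ion channels, dendritic geometry, neurotransmitter chemistry, and
    glial interaction are all collapsed into one weighted sum plus a
    sigmoid activation -- the simplification criticized above.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squash to (0, 1)

# With zero input and zero bias the sigmoid sits at its midpoint:
print(artificial_neuron([0.0], [0.0], 0.0))  # prints 0.5
```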
<br />
<br />
===Current research===
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
<br />
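The timings quoted above imply an enormous gap from real time; a quick calculation, derived only from the numbers in this section, quantifies it:<br />

```python
# 2005 Artificial Intelligence System run: 50 days of wall-clock time on a
# 27-processor cluster to simulate 1 second of model time.
wall_clock_seconds = 50 * 24 * 3600
slowdown = wall_clock_seconds / 1  # real seconds per simulated second
print(f"slowdown: {slowdown:,.0f}x")  # prints: slowdown: 4,320,000x

# The 2006 Blue Brain column ran in real time, but covered only ~10^4
# neurons of the brain's ~10^11 -- about seven orders of magnitude short.
coverage_gap = 1e11 / 1e4
print(f"coverage gap: {coverage_gap:.0e}")  # prints: coverage gap: 1e+07
```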
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1,000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is clearly substantial.<br />
<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), such a computer should be capable of supporting a simulation of a bee brain, but despite some interest,<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists.{{Citation needed|date=April 2011}} There are at least three reasons for this:<br />
<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
<br />
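The two whole-brain estimates above differ, and the 86-billion figure is itself dominated by the cerebellum; a small sketch using only the numbers quoted in this paragraph:<br />

```python
total_neurons = 86e9   # Azevedo et al. estimate
cortex = 16.3e9        # cerebral cortex
cerebellum = 69e9      # cerebellum
rest = total_neurons - cortex - cerebellum
print(f"cerebellum share: {cerebellum / total_neurons:.0%}, "
      f"other structures: {rest / 1e9:.1f} billion")
# prints: cerebellum share: 80%, other structures: 0.7 billion
```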
<br />
<br />
==Strong AI and consciousness==
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
In science fiction, AGI is associated with traits such as consciousness, sentience, sapience, and self-awareness observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "strong AI hypothesis." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a mind and consciousness. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
在科幻小说中,通用人工智能与生物所具有的意识、知觉、智慧和自我意识等特征相关联。然而,根据塞尔的说法,通用智能是否足以产生意识还是一个悬而未决的问题。“强人工智能”(如上文库兹韦尔所定义的)不应与塞尔的“强人工智能假说”相混淆。强人工智能假说认为,一台表现得像人一样智能的计算机必然也拥有思想和意识。通用人工智能只指机器所表现出的智能程度,与是否拥有思想无关。<br />
<br />
<br />
<br />
===Consciousness 意识===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence:<br />
<br />
除智能之外,人类心智还有其他一些与强人工智能概念相关的方面,它们在科幻小说和人工智能伦理中扮演着重要角色:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
意识:拥有主观体验和思想。值得一提的是,意识是很难定义的。托马斯·内格尔(Thomas Nagel)给出过一个著名定义:有意识就意味着“感觉起来像某种东西”。如果我们没有意识,那么我们不会有任何感觉。内格尔以蝙蝠为例:我们可以合理地问“成为一只蝙蝠的感觉如何?”但是,我们不大可能问“成为一个吐司机的感觉如何?”内格尔总结认为,蝙蝠看起来是有意识的(即拥有意识),而吐司机则不是。<br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
自我意识:能够意识到自己是一个独立的个体,尤其是意识到自己的思想。<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
知觉:主观地“感受”感知或情绪的能力。<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
智慧:具备智慧的能力。<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the rights of non-human animals. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<br />
<br />
这些特征具有道德维度,因为拥有这种形式强人工智能的机器可能拥有法律权利,类似于非人类动物的权利。因此,人们已经就如何将完全的道德主体纳入现有法律和社会框架开展了初步工作,这些探索侧重于“强”人工智能的法律地位和权利。<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
However, Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity. It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
然而,比尔·乔伊(Bill Joy)等人认为,具有这些特征的机器可能会威胁人类的生命或尊严。这些特征中是否有任何一项是强人工智能所必需的,仍有待证明。意识的作用并不清楚,目前也没有公认的测试来判断意识是否存在。如果一台机器装有模拟意识神经相关物的装置,它会自动拥有自我意识吗?也有可能其中一些特性(比如知觉)会从一台完全智能的机器中自然涌现,或者一旦机器开始以一种明显智能的方式行动,人们就会自然而然地认为这些特性是机器自主产生的。<br />
译注:“或者一旦机器开始以一种明显智能的方式行动,人们就会自然而然地认为这些特性是机器自主产生的。”对应原句“or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent.”,与原句在语序和措辞上略有不同,是译者考虑到中文的阅读习惯在不改变原意的条件下意译得出的。<br />
例如,智能行为可能足以判定机器产生了知觉,而非反过来。<br />
<br />
<br />
<br />
===Artificial consciousness research 人工意识研究===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.<br />
<br />
虽然意识在强人工智能/通用人工智能中的作用是有争议的,但是很多通用人工智能的研究人员认为,研究实现意识的可能性是至关重要的。在早期的一项工作中,伊戈尔·亚历山大(Igor Aleksander)认为,创造有意识机器的原理已经存在,但训练这样一台机器理解语言需要四十年时间。<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research 人工智能研究进展缓慢的可能解释==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level. A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power. In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.<br />
<br />
自从1956年人工智能研究启动以来,这一领域的发展随着时间推移而放缓,使创造具备人类水平智能行为的机器这一目标陷入停滞。对这种延迟的一个可能解释是,计算机缺乏足够的内存或处理能力。此外,与人工智能研究过程相关的复杂程度也可能限制人工智能研究的进展。<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like Hubert Dreyfus and Roger Penrose who deny the possibility of achieving strong AI. John McCarthy was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.<br />
<br />
虽然大多数人工智能研究人员相信强人工智能在未来可以实现,但也有一些人,如休伯特·德雷福斯(Hubert Dreyfus)和罗杰·彭罗斯(Roger Penrose),否认实现强人工智能的可能性。约翰·麦卡锡(John McCarthy)与许多计算机科学家一样,相信人类水平的人工智能终将实现,但无法准确预测具体日期。<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究人员可能需要修改其学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如威廉·克罗克森(William Clocksin)在2003年所写:“这个框架始于魏岑鲍姆(Weizenbaum)的观察,即智能只有相对于特定的社会和文化背景才得以显现。”<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking (Moravec's paradox). A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent. However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.<br />
<br />
此外,人工智能研究人员已经能够造出可以完成对人类而言很复杂的工作(如数学)的计算机,但相反,他们却难以开发出能执行对人类来说很简单的任务(如行走)的计算机(莫拉维克悖论)。大卫·格勒尼特(David Gelernter)描述的一个问题是,有些人假定思考和推理是等价的。然而,思想与思想的创造者是否彼此独立这一问题引起了人工智能研究者的兴趣。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI. Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员未能兑现的预测,以及对人类行为缺乏完整的理解,削弱了人类水平人工智能这一最初设想。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究者仍乐观地认为,人工智能的目标有望在21世纪实现。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware. Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.<br />
<br />
对于强人工智能研究为何耗时如此之长,人们还提出了其他可能的原因。科学问题的错综复杂,以及需要通过心理学和神经生理学充分理解人脑,限制了许多研究人员在计算机硬件中模拟人脑功能的工作。许多研究人员倾向于低估对人工智能未来预测的种种怀疑,但如果不认真对待这些问题,人们就可能忽视那些棘手问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment. When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.<br />
<br />
克罗克森说,阻碍人工智能研究进展的一个概念上的限制是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员最初瞄准人工智能这一目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并弄清如何设计一台执行特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.<br />
<br />
抽象的实践(人们在研究中面对特定语境时往往会重新定义它)使研究人员得以专注于少数几个概念。抽象在人工智能研究中最富成效的应用来自规划和问题求解。尽管其目标是提高计算速度,但抽象的作用也对抽象算子的引入提出了疑问。<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance. The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
人工智能发展缓慢的一个可能原因,是许多人工智能研究人员承认,在启发式方法这一领域,计算机表现与人类表现之间存在着显著差距。编入计算机的特定功能或许能够满足使其与人类智能相匹配的许多要求。这些解释未必就是强人工智能实现延迟的根本原因,但它们得到了众多研究人员的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
There have been many AI researchers that debate over the idea whether machines should be created with emotions. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own. Emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<br />
<br />
许多人工智能研究人员一直在争论是否应该创造带有情感的机器。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感概括了人类的经历,因为它使人们能够记住那些经历。大卫·格勒尼特(David Gelernter)写道:“除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”随着研究走向未来,这种对情感的关注给人工智能研究人员提出了问题,并与强人工智能的概念联系在一起。<br />
<br />
<br />
<br />
==Controversies and dangers 争议与危险==<br />
<br />
<br />
<br />
===Feasibility 可行性===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
As of March 2020, AGI remains speculative as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<br />
<br />
截至2020年3月,通用人工智能仍停留在推测阶段,因为迄今尚未有这样的系统被展示出来。对于通用人工智能是否会实现以及何时实现,人们的看法各不相同。在一个极端,人工智能先驱赫伯特·西蒙(Herbert A. Simon)在1965年写道:“机器将在20年内能够完成人类能做的任何工作。”然而,这个预言并没有实现。微软(Microsoft)联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”和“对认知的深入科学理解”。机器人专家艾伦·温菲尔德(Alan Winfield)在《卫报》上撰文称,现代计算与人类水平人工智能之间的鸿沟,如同当前的太空飞行与实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further current AGI progress considerations can be found below Tests for confirming human-level AGI and IQ-tests AGI.<br />
<br />
人工智能专家对通用人工智能可行性的看法时起时落,并可能在2010年代出现了复苏。2012年和2013年进行的四次民意调查显示,专家们对“有50%的信心通用人工智能将会实现”的时间的中位数猜测为2040年至2050年(因调查而异),平均值则为2081年。在这些专家中,16.5%的人在被问及同一问题但置信度改为90%时回答“永远不会”。关于通用人工智能当前进展的进一步讨论,可参见下文“确认人类水平通用人工智能的测试”和“通用人工智能的智商测试”。<br />
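The gap between the polls' median (2040–2050) and mean (2081) reflects how a minority of very late or "never" answers drags the mean upward while leaving the median largely unchanged. A minimal sketch with made-up numbers (not the actual survey responses) illustrates the effect. 调查结果中位数与平均值的差距,源于少数极晚或“永远不会”的回答会拉高平均值,而中位数几乎不受影响;下面用虚构数据作一个简单演示:

```python
from statistics import mean, median

# Hypothetical expert predictions for the arrival year of AGI
# (illustrative only; these are NOT the actual survey responses).
predictions = [2040, 2045, 2045, 2050, 2050, 2055, 2060, 2070, 2150, 2500]

med = median(predictions)  # robust to the two late outliers
avg = mean(predictions)    # dragged upward by them

print(med)  # 2052.5
print(avg)  # 2106.5
```

A few extreme answers shift the mean by half a century here, which is why surveys of this kind usually report the median.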
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}} 对人类存在的潜在威胁===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial: "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here – we'll leave the lights on?' Probably not – but this is more or less what is happening with AI."<br />
<br />
人工智能构成生存风险,且这一风险需要比目前多得多的关注,这一论点已得到许多公众人物的支持;其中最著名的也许是埃隆·马斯克(Elon Musk)、比尔·盖茨(Bill Gates)和斯蒂芬·霍金(Stephen Hawking)。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素(Stuart J. Russell)。该论点的支持者有时对怀疑论者表示困惑:盖茨表示他不“理解为什么有些人不关心”,霍金则在2014年的社论中批评了普遍的冷漠:“面对收益与风险都无法估量的可能未来,专家们肯定在尽一切可能确保最好的结果,对吧?错。如果一个更先进的外星文明给我们发来信息说‘我们几十年后就到’,我们难道只会回复‘好的,你们到了给我们打个电话,我们会把灯留着’吗?大概不会,但这或多或少正是人工智能领域正在发生的事情。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<br />
<br />
许多关注生存风险的学者认为,最好的前进方式是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个问题:程序员可以实现哪些类型的保障措施、算法或架构,以最大限度地提高其递归自我改进的人工智能在达到超级智能后继续以友好而非破坏性方式行事的可能性?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<br />
<br />
认为人工智能可能构成生存风险的论点也有许多坚定的反对者。怀疑论者有时指责这一论点带有隐秘的宗教色彩,即用对超级智能可能性的非理性信仰取代了对全能上帝的非理性信仰;在极端情况下,杰伦·拉尼尔(Jaron Lanier)认为,“当前的机器在任何意义上是智能的”这整个概念是“一种幻觉”,是富人的“惊天骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first = Anthony | last = Berglas | title = Artificial Intelligence will Kill our Grandchildren | year = 2008 | url = http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last = Gelernter | first = David | year = 2010 | title = Dream-logic, the Internet and Artificial Thought | url = http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate = 25 July 2010 }}<br />
<br />
* {{Citation | editor1-last = Goertzel | editor1-first = Ben | authorlink = Ben Goertzel | editor2-last = Pennachin | editor2-first = Cassio | year = 2006 | title = Artificial General Intelligence | publisher = Springer | url = http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn = 978-3-540-23733-4 | archive-url = https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date = 20 March 2013 }}<br />
<br />
* {{Citation | last = Goertzel | first = Ben | authorlink = Ben Goertzel | last2 = Wang | first2 = Pei | year = 2006 | title = Introduction: Aspects of Artificial General Intelligence | url = http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last = de Vega | editor1-first = Manuel | editor2-last = Glenberg | editor2-first = Arthur | editor3-last = Graesser | editor3-first = Arthur | year = 2008 | title = Symbols and Embodiment: Debates on meaning and cognition | publisher = Oxford University Press | isbn = 978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi (彩云小译) and has not yet been manually reviewed or proofread; apologies for any inconvenience in reading.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
<br />
{{Harvnb|Russell|Norvig|2003}},<br />
<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
<br />
{{Harvnb|Nilsson|1998}}.<br />
<br />
</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI 确认人类水平通用人工智能的测试{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
The following tests to confirm human-level AGI have been considered:<br />
<br />
人们考虑过以下用于确认人类水平通用人工智能的测试:<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
The Turing Test (Turing)<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
一台机器和一个人类都在互不见面的情况下与第二个人类对话,后者必须判断两者中哪一个是机器;如果机器能在相当大比例的情况下骗过这位评估者,就算通过了测试。注意:图灵并没有规定什么才算得上智能,只规定了一旦知道对方是机器,就应判定其不通过。<br />
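The pass criterion described above can be sketched in code. This is purely illustrative: the `evaluator`, `machine`, and `human` callables are hypothetical placeholders, and the 30% "significant fraction" threshold is an arbitrary choice, not one Turing specified:

```python
import random

def run_imitation_game(evaluator, machine, human, rounds=100, threshold=0.3):
    """Sketch of the pass criterion in Turing's imitation game.

    Each round, the evaluator faces two anonymous parties ("A" and "B"),
    sight unseen, and must name the one it believes is the machine.
    The machine passes if it fools the evaluator in a significant
    fraction of rounds.
    """
    fooled = 0
    for _ in range(rounds):
        parties = [machine, human]
        random.shuffle(parties)        # hide which channel is which
        labels = dict(zip("AB", parties))
        guess = evaluator(labels)      # a real evaluator would converse first, then return "A" or "B"
        if labels[guess] is not machine:
            fooled += 1
    return fooled / rounds >= threshold
```

A perfect evaluator drives the pass rate to zero; an evaluator that is always fooled makes any machine pass.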
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
The Coffee Test (Wozniak)<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
一台机器需要进入一个普通的美国家庭,并弄清楚如何制作咖啡: 找到咖啡机,找到咖啡,加水,找到一个马克杯,并通过按下正确的按钮来煮咖啡。<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
The Robot College Student Test (Goertzel)<br />
<br />
机器人大学生考试(格兹尔)<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
一台机器进入一所大学,学习并通过与人类相同的课程,并获得学位。<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
The Employment Test (Nilsson)<br />
<br />
就业测试(尼尔森)<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
机器从事一项经济上重要的工作,在同一项工作中表现至少和人类一样好。<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve 等待通用人工智能解决的问题 ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<br />
<br />
对于计算机来说,最困难的问题被非正式地称为“AI完全问题”或“AI困难问题”,这意味着解决它们需要相当于人类智能的通用能力(即强人工智能),超出了专用算法的能力范围。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.<br />
<br />
据推测,AI完全问题包括通用的计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks.<br />
<br />
目前的计算机技术不能单独解决AI完全问题,还需要人工计算。例如,这一特性可以用来检测人类是否在场(这正是 CAPTCHA 验证码的目标),也可以用于计算机安全以抵御暴力破解攻击。<br />
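The CAPTCHA idea above, gating access behind a problem that is easy for humans but AI-hard for programs, can be sketched as follows. The `render_distorted` function is a stand-in of our own: it merely marks where a real system would rasterize the text into a warped, noisy image:

```python
import random
import string

def render_distorted(text):
    # Stand-in for the AI-hard step: a real CAPTCHA would rasterize the
    # text into a warped, noisy image designed to defeat current OCR.
    return f"<distorted image of {len(text)} glyphs>"

def new_captcha(length=6):
    """Generate a challenge/secret pair for one human-presence check."""
    secret = "".join(random.choices(string.ascii_lowercase, k=length))
    return render_distorted(secret), secret

def is_human(secret, response):
    # Reading the distorted image is the hard part; the comparison
    # itself is trivial once the perception problem is solved.
    return response.strip().lower() == secret
```

The security argument rests entirely on the rendering step: whoever answers correctly has, in effect, solved a perception problem that current purpose-specific algorithms cannot.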
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
Modern AI research began in the mid 1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus prediction of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," although Minsky states that he was misquoted.<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<br />
<br />
然而,在20世纪70年代初,研究人员显然严重低估了这个项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。进入20世纪80年代,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间线,其中包括“进行日常交谈”之类的通用人工智能目标。为了响应这一项目以及专家系统的成功,工业界和政府都重新向这一领域注入资金。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标也从未实现。在20年里,预言通用人工智能即将实现的人工智能研究人员第二次被证明从根本上错了。到了20世纪90年代,人工智能研究人员已经因做出无法兑现的承诺而声名不佳。他们变得根本不愿做预测,并避免提及“人类水平”的人工智能,以免被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<br />
<br />
在20世纪90年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的特定子问题(例如人工神经网络和统计机器学习),取得了远为巨大的商业成功和学术声望。这些“应用人工智能”系统如今在整个技术产业中得到广泛应用,这方面的研究也在学术界和产业界获得了大量资助。目前,这一领域的发展被认为是一种新兴趋势,预计要在10年以上之后才会进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过组合解决各个子问题的程序,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会与传统的自上而下的路线中途相会,并提供在推理程序中一直令人沮丧地难以获得的真实世界能力和常识知识。当象征性的金道钉被钉下、把这两方面的努力连为一体时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote><br />
<br />
然而,即使这一基本思路也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在其1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的‘自上而下’(符号)方法终将在中间某处与‘自下而上’(感官)方法相会。如果本文中关于符号奠基的考虑是正确的,那么这种期望就是无可救药地模块化的,而从感觉到符号实际上只有一条可行的路径:自底向上。像计算机软件层那样自由漂浮的符号层永远无法通过这条路径达到(反之亦然);也不清楚我们为什么要试图达到这样一个层次,因为那看起来无异于把我们的符号从其内在意义上连根拔起(从而仅仅把我们自己还原为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.<br />
<br />
“通用人工智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作战的影响时使用。这个术语在2002年左右由肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并加以推广。相应的研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)的 Cyc 项目(始于1984年)和艾伦·纽厄尔(Allen Newell)的 Soar 项目都被认为属于通用人工智能的范畴。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了通用人工智能的第一个暑期学校。第一批大学课程于2010年和2011年由托多尔·阿瑙多夫(Todor Arnaudov)在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,在短期内无法完全复制。不过,仍有少数计算机科学家活跃在通用人工智能研究领域,其中许多人正在为一系列通用人工智能会议做出贡献。这些研究极其多样,而且往往具有开创性。格兹尔在其著作的导言中说,对于构建一个真正灵活的通用人工智能所需的时间,各种估计从10年到超过一个世纪不等,但通用人工智能研究社区似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展会如此之快。明确追求通用人工智能的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及与之相关的红木神经科学研究所(Redwood Neuroscience Institute)。此外,机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等组织的成立也是为了影响通用人工智能的发展路径。最后,像人脑计划(Human Brain Project)这样的项目则以构建一个可运行的人脑模拟为目标。2017年一项针对通用人工智能的调查归类了45个已知的、明确地或隐含地(通过已发表的研究)研究通用人工智能的“活跃研发项目”,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对公开可用且可自由访问的弱人工智能(如谷歌人工智能或苹果的 Siri 等)进行了智能测试。这些人工智能最高达到约47的数值,大致相当于一名一年级的六岁儿童的水平,而成年人的平均值约为100。2014年也进行过类似的测试,当时的智商得分最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain==
<br />
<br />
<br />
===Whole brain emulation===
<br />
{{main|Mind uploading}}<br />
<br />
A popular discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
<br />
<br />
<br />
===Early estimates===
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
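The figures quoted above can be cross-checked with a few lines of arithmetic. This is a back-of-the-envelope sketch, not part of any cited estimate; the 1.1-year doubling time comes from the trendline in the figure caption, and the 10<sup>9</sup>-cps desktop figure is quoted later in this article:

```python
import math

# Synapse count implied by the per-neuron figures quoted above.
neurons = 1e11                 # ~100 billion neurons
synapses_per_neuron = 7_000    # average connections per neuron
synapses = neurons * synapses_per_neuron
print(f"implied synapses: {synapses:.1e}")   # 7.0e+14, just above the 1e14-5e14 adult range

# Kurzweil's 10^16 computations per second, expressed in petaFLOPS
# (1 petaFLOPS = 10^15 floating point operations per second).
cps = 1e16
petaflops = cps / 1e15
print(f"equivalent petaFLOPS: {petaflops:.0f}")  # 10, the level supercomputers reached in 2011

# Years for a 10^9-cps machine (a 2005-era desktop) to reach 10^16 cps,
# assuming capacity doubles every 1.1 years as in the trendline above.
doublings = math.log2(cps / 1e9)
years = doublings * 1.1
print(f"years of doubling needed: {years:.1f}")  # ~25.6
```

Note that the naive product of the quoted per-neuron figures lands slightly above the upper adult synapse estimate, which is consistent with synapse counts declining from childhood.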
<br />
<br />
<br />
<br />
===Modelling the neurons in more detail===
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
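The cell-count consequence of including glia can be made concrete; a quick sketch using only the ratios quoted above:

```python
# Glial cells are "at least as numerous" as neurons and may outnumber them 10:1.
neurons = 1e11

glia_low = neurons * 1     # lower bound: as many glia as neurons
glia_high = neurons * 10   # upper bound: ten glia per neuron

# Total cell count a full simulation would have to cover, per bound.
total_low = neurons + glia_low
total_high = neurons + glia_high
print(f"cells to model: {total_low:.1e} to {total_high:.1e}")  # 2.0e+11 to 1.1e+12
```

Even at the lower bound, accounting for glia doubles the number of cells to model, before any of the molecular-scale detail mentioned above is considered.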
<br />
<br />
<br />
<br />
===Current research===
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
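How far the 2005 simulation was from real time, and how the Blue Brain column compares with a whole brain, follows directly from the quoted numbers; a rough sketch:

```python
# Artificial Intelligence System (2005): 50 days of wall-clock time on a
# 27-processor cluster to simulate 1 second of model time.
wall_clock_seconds = 50 * 24 * 60 * 60
simulated_seconds = 1
slowdown = wall_clock_seconds / simulated_seconds
print(f"slowdown vs. real time: {slowdown:,.0f}x")  # 4,320,000x

# Blue Brain (2006): one rat neocortical column of ~10,000 neurons simulated
# in real time.  Scaling by neuron count alone (a deliberately naive measure):
column_neurons = 10_000
human_brain_neurons = 1e11
columns_per_brain = human_brain_neurons / column_neurons
print(f"columns per human brain: {columns_per_brain:.0e}")  # 1e+07
```

The naive neuron-count scaling ignores inter-column connectivity, which is part of why whole-brain estimates in the previous section run so much higher.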
<br />
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aims at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
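The second estimate above can be checked for internal consistency; a one-line sketch (the ~0.7 billion remainder, neurons outside cortex and cerebellum, is implied by the quoted figures rather than stated):

```python
total = 86e9        # total neurons (second estimate)
cortex = 16.3e9     # cerebral cortex
cerebellum = 69e9   # cerebellum
remainder = total - cortex - cerebellum
print(f"neurons elsewhere in the brain: {remainder:.1e}")  # ~7.0e+08
```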
<br />
<br />
<br />
<br />
==Strong AI and consciousness==
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence:<br />
<br />
除了智能之外,人类思维还有另一些方面与强人工智能的概念相关,它们在科幻小说和人工智能伦理中扮演着重要角色:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
意识:拥有主观体验和思想。值得一提的是,意识很难定义。托马斯·内格尔给出的一个流行定义是:有意识就意味着“感觉起来像”某种东西;如果我们没有意识,那就什么感觉也没有。内格尔以蝙蝠为例:我们可以合理地问“成为一只蝙蝠是什么感觉?”但是,我们不大可能问“成为一个吐司机是什么感觉?”内格尔由此得出结论:蝙蝠似乎是有意识的(即拥有意识),而吐司机则没有。<br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
自我意识:能够意识到自己是一个独立的个体,尤其是意识到自己的思想。<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
知觉:主观地“感受”感知或情绪的能力。<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
智慧:获得智慧的能力。<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the rights of non-human animals. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<br />
<br />
这些特征具有道德维度,因为具有这种形式的强人工智能的机器可能拥有法律权利,类似于非人类动物的权利。因此,人们已经开展了初步工作,探索将完全的道德主体纳入现有法律和社会框架的方法。这些方法集中于“强”人工智能的法律地位和权利。<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
However, Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity. It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
然而,比尔·乔伊等人认为,具有这些特征的机器可能会威胁到人类的生命或尊严。这些特征对于强人工智能是否必要还有待证明。意识的作用并不清楚,目前也没有公认的方法来检测它的存在。如果一台机器装有模拟意识神经相关物的装置,它会自动拥有自我意识吗?也有可能其中一些特性(比如知觉)会从完全智能的机器中自然涌现,或者一旦机器开始以明显智能的方式行动,人们就会自然地把这些特性归于机器。例如,智能行为可能足以产生知觉,而不是反过来。<br />
<br />
<br />
<br />
===Artificial consciousness research 人工意识研究===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.<br />
<br />
虽然意识在强人工智能/通用人工智能中的作用尚有争议,但许多通用人工智能研究人员认为,研究实现意识的可能性至关重要。Igor Aleksander 在早期的工作中认为,创造有意识机器的原则当时已经存在,但训练这样一台机器理解语言还需要四十年。<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research 人工智能研究进展缓慢的可能解释==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level. A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power. In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.<br />
<br />
自从1956年开始人工智能研究以来,这一领域的发展速度已经随着时间的推移而放缓,并且阻碍了创造具有人类水平的智能行为的机器的目标。这种延迟的一个可能的解释是计算机缺乏足够的存储空间或处理能力。此外,与人工智能研究过程相关的复杂程度也可能限制人工智能研究的进展。<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like Hubert Dreyfus and Roger Penrose who deny the possibility of achieving strong AI. John McCarthy was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.<br />
<br />
虽然大多数人工智能研究人员相信强人工智能可以在未来实现,但也有一些人,如休伯特·德雷福斯和罗杰·彭罗斯,否认实现强人工智能的可能性。约翰·麦卡锡是众多相信人类水平人工智能终将实现的计算机科学家之一,只是其日期无法准确预测。<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究人员可能需要修改其学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如 William Clocksin 在2003年所写:“这个框架始于 Weizenbaum 的观察,即智能只有相对于特定的社会和文化背景才会表现出来。”<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking (Moravec's paradox). A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent. However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.<br />
<br />
此外,人工智能研究人员已经能够创造出可以完成对人类而言很复杂的工作(如数学)的计算机,但相反,他们却难以开发出能够执行对人类而言很简单的任务(如行走)的计算机(莫拉维克悖论)。大卫·格勒尼特描述的一个问题是,有些人认为思考和推理是等价的。然而,思想与思想的创造者是否各自独立存在的问题引起了人工智能研究者的兴趣。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI. Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员做出的未能实现的预测,以及对人类行为缺乏完整的理解,削弱了人类水平人工智能这一基本构想。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究者仍对在21世纪实现人工智能的目标持乐观态度。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware. Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.<br />
<br />
对于强人工智能研究为何旷日持久,人们还提出了其他可能的原因。科学问题的错综复杂,以及需要通过心理学和神经生理学充分了解人脑,限制了许多研究人员在计算机硬件中模拟人脑功能的工作。许多研究人员倾向于低估对人工智能未来预测的种种怀疑,但如果不认真对待这些问题,人们就会忽视疑难问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment. When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.<br />
<br />
Clocksin 说,阻碍人工智能研究进展的一个概念上的限制是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员第一次开始瞄准人工智能的目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并找出如何设计一台具有特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.<br />
<br />
抽象是人们在研究中针对特定语境时往往会重新定义的一种做法,它让研究人员得以专注于少数几个概念。抽象在人工智能研究中最富有成效的应用来自规划和问题求解。尽管其目的是提高计算速度,但抽象的作用也引发了关于抽象算子如何参与的问题。<br />
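The speed-up from abstraction in planning and problem solving described above can be made concrete with a small illustrative sketch. This example and all names in it (`astar`, `grid_neighbors`, `manhattan`) are our own and not drawn from the cited literature: a relaxed, abstract version of a problem (here, a grid with its walls ignored) is solved exactly, and the exact cost in the abstract problem (the Manhattan distance) is used as an admissible heuristic guiding A* search in the concrete problem.

```python
from heapq import heappush, heappop


def astar(start, goal, neighbors, h):
    """Generic A* search; returns the cost of a cheapest path, or None."""
    open_heap = [(h(start), 0, start)]  # entries: (f = g + h, g, node)
    best = {start: 0}                   # cheapest g-score found so far
    while open_heap:
        f, g, node = heappop(open_heap)
        if node == goal:
            return g
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None


# Concrete problem: a 5x5 grid with walls. The abstraction simply drops
# the walls, leaving a problem that is solvable in closed form.
WALLS = {(1, 1), (1, 2), (1, 3)}


def grid_neighbors(p):
    """Unit-cost moves in the concrete (walled) grid."""
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (x + dx, y + dy)
        if 0 <= q[0] < 5 and 0 <= q[1] < 5 and q not in WALLS:
            yield q, 1


def manhattan(goal):
    """Heuristic = exact solution cost in the abstract, wall-free grid."""
    return lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])


goal = (4, 4)
print(astar((0, 0), goal, grid_neighbors, manhattan(goal)))  # prints 8
```

Because the abstract solution never overestimates the concrete cost, A* with this heuristic still returns optimal paths while typically expanding far fewer states than uninformed search.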
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance. The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
人工智能发展缓慢的一个可能原因是,许多人工智能研究人员承认,在启发式方法这一领域,计算机性能与人类性能之间存在重大差距。为计算机编程的特定功能或许能够满足使其与人类智能相匹配的许多要求。这些解释未必就是强人工智能迟迟未能实现的根本原因,但它们得到了众多研究人员的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
There have been many AI researchers that debate over the idea whether machines should be created with emotions. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own. Emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<br />
<br />
许多人工智能研究人员一直在争论机器是否应该带有情感。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感总结了人类的经历,因为它允许人们记住那些经历。大卫 · 格勒尼特写道: “除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情绪的关注给人工智能研究人员带来了一些问题,随着未来人工智能研究的进展,它与强人工智能的概念相联系。<br />
<br />
<br />
<br />
==Controversies and dangers 争议与风险==<br />
<br />
<br />
<br />
===Feasibility 可行性===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
As of March 2020, AGI remains speculative as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<br />
<br />
截至2020年3月,通用人工智能仍属推测,因为尚未有这样的系统得到展示。对于通用人工智能是否会到来以及何时到来,人们看法不一。在一个极端,人工智能先驱赫伯特·西蒙在1965年写道:“机器将在二十年内能够完成人类能做的任何工作。”然而,这一预言并未实现。微软联合创始人保罗·艾伦认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”以及“对认知的深刻科学理解”。机器人专家 Alan Winfield 在《卫报》撰文称,现代计算与人类水平人工智能之间的鸿沟,就像当前的太空飞行与实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further current AGI progress considerations can be found below Tests for confirming human-level AGI and IQ-tests AGI.<br />
<br />
人工智能专家对通用人工智能可行性的看法时涨时落,并可能在2010年代出现了复苏。2012年和2013年进行的四次民意调查显示,专家们对“何时有50%的把握认为通用人工智能将会到来”的猜测中位数为2040年至2050年(因调查而异),平均值为2081年。被问及同一问题但要求90%的把握时,16.5%的专家回答“永远不会”。关于通用人工智能当前进展的进一步讨论,可参见下文“确认人类水平通用人工智能的测试”和“通用人工智能智商测试”。<br />
<br />
<br />
<br />
===Potential threat to human existence 对人类生存的潜在威胁{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial: 'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on?" Probably not – but this is more or less what is happening with AI.'<br />
<br />
认为人工智能构成生存风险、且这一风险需要得到比目前多得多的关注的论点,已得到许多公众人物的支持;其中最著名的也许是埃隆·马斯克、比尔·盖茨和斯蒂芬·霍金。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素。该论点的支持者有时对怀疑论者表示困惑:盖茨表示他不“理解为什么有些人不担心”,霍金则在2014年的社论中批评了普遍的冷漠:“因此,面对可能带来无法估量的利益和风险的未来,专家们肯定在竭尽所能确保最好的结果,对吧?错。如果一个更先进的外星文明给我们发来信息说‘我们几十年后到达’,我们会只回复‘好的,你们到了给我们打电话,我们把灯留着’吗?大概不会,但人工智能领域正在发生的事情大致就是如此。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<br />
<br />
许多关注生存风险的学者认为,最好的前进方式是开展(可能是大规模的)研究,以解决困难的“控制问题”,回答这样一个问题:程序员可以实现哪些类型的保障措施、算法或架构,以最大限度地提高其递归自我改进的人工智能在达到超级智能之后继续以友好而非破坏性方式行事的可能性?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<br />
<br />
认为人工智能可能构成生存风险的论点也遭到许多人的强烈反对。怀疑论者有时指责这一论点带有隐秘的宗教色彩,用对超级智能可能性的非理性信仰取代了对全能上帝的非理性信仰;在极端情况下,杰伦·拉尼尔(Jaron Lanier)认为,“当前机器在任何意义上是智能的”这一整套概念是富人制造的“一种幻觉”和“惊人的骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<br />
<br />
现有的许多批评认为,通用人工智能在短期内不太可能实现。计算机科学家戈登·贝尔(Gordon Bell)认为,人类在到达技术奇点之前就会自我毁灭。摩尔定律的最初提出者戈登·摩尔宣称:“我是一个怀疑论者。我不相信(技术奇点)可能发生,至少在很长一段时间内不会。我也不知道自己为什么会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,人工智能的生存风险“就像担心火星人口过剩,而我们甚至还没有踏上这颗星球”。<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}

* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last = Gelernter | first = David | year = 2010 | title = Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate= 25 July 2010 }}
<br />
* {{Citation | editor1-last = Goertzel | editor1-first = Ben | authorlink = Ben Goertzel | editor2-last = Pennachin | editor2-first= Cassio | year = 2006 | title=Artificial General Intelligence | publisher = Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn = 978-3-540-23733-4 | archive-url = https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date = 20 March 2013 }}
<br />
* {{Citation | last = Goertzel | first = Ben | authorlink = Ben Goertzel | last2 = Wang | first2 = Pei | year = 2006 | title = Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last = de Vega | editor1-first = Manuel | editor2-last = Glenberg | editor2-first = Arthur | editor3-last = Graesser | editor3-first = Arthur | year = 2008 | title = Symbols and Embodiment: Debates on meaning and cognition | publisher = Oxford University Press | isbn=978-0-19-921727-4 }}
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>

粲兰 https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=15011 通用人工智能 2020-10-10T15:33:42Z <p>粲兰:</p>
<hr />
<div>This entry was machine-translated by 彩云小译 (Caiyun Xiaoyi) and has not yet been manually edited or proofread; apologies for any reading inconvenience.
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref>
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref>
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];

* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];

* [[automated planning and scheduling|plan]];

* [[machine learning|learn]];

* communicate in [[natural language processing|natural language]];

* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
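The imitation-game protocol described above can be sketched as a small evaluation harness. Everything below is illustrative: the `Responder` interface, the scripted players, and the 30% pass threshold (Turing's informal "significant fraction" is often read, from his 1950 prediction, as fooling the judge about 30% of the time after five minutes of conversation) are assumptions, not part of Turing's formulation.

```python
import random

class Responder:
    """Either a human or a machine; the judge sees only text."""
    def reply(self, prompt: str) -> str:
        raise NotImplementedError

class EchoMachine(Responder):
    # A deliberately weak stand-in for the machine under test.
    def reply(self, prompt: str) -> str:
        return "Interesting question: " + prompt

class ScriptedHuman(Responder):
    def reply(self, prompt: str) -> str:
        return "Let me think about that for a moment..."

def run_trial(judge, machine: Responder, human: Responder, prompts):
    """One round of the imitation game: the judge converses blindly with
    both parties (labelled A/B at random) and must name the machine.
    Returns True if the machine fooled the judge."""
    a, b = (machine, human) if random.random() < 0.5 else (human, machine)
    transcript = {"A": [(p, a.reply(p)) for p in prompts],
                  "B": [(p, b.reply(p)) for p in prompts]}
    guess = judge(transcript)  # judge returns "A" or "B"
    machine_label = "A" if a is machine else "B"
    return guess != machine_label

def passes_turing_test(judge, machine, human, prompts, trials=100, threshold=0.3):
    # The 30% threshold is an assumption for illustration.
    fooled = sum(run_trial(judge, machine, human, prompts) for _ in range(trials))
    return fooled / trials >= threshold
```

Note that, as the text says, the judge is free to use any conversational strategy: a judge that spots the machine's telltale phrasing defeats this toy machine every time, while a judge who cannot tell the transcripts apart is fooled about half the time.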
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
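The use of AI-hard problems as a gate against automated access can be illustrated with a toy challenge-response check. This sketch only issues a random code and limits guessing attempts; in a real CAPTCHA the code is rendered as a distorted image, so that reading it is the (currently) AI-hard step, which this illustration omits entirely.

```python
import random
import string

class TextCaptcha:
    """Toy CAPTCHA: issue a random code the client must echo back.
    Real systems render the code as a distorted image so that solving it
    requires human-level vision -- the 'AI-hard' step this sketch omits."""

    def __init__(self, length=6, max_attempts=3, rng=None):
        self.rng = rng or random.Random()
        self.length = length
        self.max_attempts = max_attempts
        self.attempts = 0
        self.answer = None

    def challenge(self) -> str:
        # In a real system this string would be returned as an image.
        self.answer = "".join(
            self.rng.choices(string.ascii_uppercase, k=self.length))
        self.attempts = 0
        return self.answer

    def verify(self, response: str) -> bool:
        if self.attempts >= self.max_attempts:
            return False  # locked out: blunts brute-force guessing
        self.attempts += 1
        return response.strip().upper() == self.answer
```

The attempt limit is what connects the idea to repelling brute-force attacks: even a program that guesses tirelessly is cut off after a few tries, while a human who can actually read the challenge answers it in one.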
<br />
<br />
<br />
== History ==<br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
<br />
<br />
=== Narrow AI research ===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research ===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
实现通用智能行为的一种常被讨论的方法是全脑模拟。通过对生物大脑进行详细的扫描和测绘,并将其状态复制到计算机系统或其他计算设备中,来建立一个低层次的大脑模型。计算机运行的模拟模型高度忠实于原始大脑,以至于其行为在本质上与原始大脑相同,或者就所有实际目的而言无法区分。<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
"其基本思路是:取一个特定的大脑,详细扫描其结构,并构建一个对原脑高度忠实的软件模型,使其在合适的硬件上运行时,行为方式与原始大脑基本相同。"全脑模拟在计算神经科学和神经信息学中、在以医学研究为目的的大脑模拟背景下得到讨论;在人工智能研究中,它被作为通往强人工智能的一条途径来讨论。能够提供所需详细理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,足够高质量的大脑图谱将与所需的计算能力在相近的时间尺度内出现。<br />
<br />
<br />
<br />
===Early estimates 初步预测===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
根据对在不同层次上模拟人脑所需处理能力的估计(来自雷·库兹韦尔以及安德斯·桑德伯格和尼克·博斯特罗姆),并按年份标出 TOP500 榜单上最快的超级计算机。注意图中采用对数刻度,指数趋势线假设计算能力每1.1年翻一番。库兹韦尔认为在神经模拟层次上即可实现思维上传,而桑德伯格和博斯特罗姆的报告对意识从何处产生则不太确定。]为进行低层次的大脑模拟,需要一台极其强大的计算机。人脑拥有数量庞大的突触:10<sup>11</sup>(1000亿)个神经元中的每一个平均与其他神经元有7000个突触连接。据估计,三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触;这个数字随年龄增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>(100万亿到500万亿)个突触不等。基于神经元活动的简单开关模型,对大脑处理能力的估计约为每秒10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了与人脑相当所需硬件的各种估计,采纳了每秒10<sup>16</sup>次计算(cps)这一数字。(作为比较,如果一次"计算"相当于一次"浮点运算"(一种用于评价当前超级计算机的指标),那么10<sup>16</sup>次"计算"相当于10 petaFLOPS,这一水平已于2011年达到)。他据此预测,如果撰写当时计算机性能的指数增长得以延续,所需的硬件将在2015年至2025年之间出现。<br />
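The arithmetic behind these estimates can be checked with a short script (a sketch in Python for illustration; every constant below is a figure quoted in this section, not an independent measurement):

```python
# Rough arithmetic behind the estimates quoted above (Kurzweil; Drachman;
# Russell & Norvig). All constants are the figures cited in the text.

NEURONS = 1e11               # ~10^11 neurons in the human brain
SYNAPSES_PER_NEURON = 7_000  # average synaptic connections per neuron

# Multiplying the two gives ~7e14 synapses, the same order of magnitude as
# the quoted adult range of 1e14 to 5e14 (the figures come from different
# studies, so an exact match is not expected).
total_synapses = NEURONS * SYNAPSES_PER_NEURON

# Kurzweil's 1997 hardware figure, in his non-standard unit "cps".
KURZWEIL_CPS = 1e16
# If one "computation" equals one floating-point operation, this is
# 10 petaFLOPS, a level first reached by supercomputers in 2011.
petaflops = KURZWEIL_CPS / 1e15

print(f"{total_synapses:.0e} synapses, {petaflops:.0f} petaFLOPS")
```

The point of the sketch is only that the headline numbers are order-of-magnitude consistent with each other, which is also how the text uses them.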
<br />
<br />
<br />
===Modelling the neurons in more detail 对神经元的更精细的模拟===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
与生物神经元相比,库兹韦尔所假设的、并在当前许多人工神经网络实现中使用的人工神经元模型是很简单的。大脑模拟很可能需要捕捉生物神经元细胞行为的细节,而目前人们对此只有最粗略的了解。对神经行为的生物、化学和物理细节(尤其是分子尺度上)进行完整建模,所需的计算能力将比库兹韦尔的估计大几个数量级。此外,这些估计没有考虑神经胶质细胞;其数量至少与神经元相当,甚至可能多达神经元的10倍,并且现已知它们在认知过程中发挥作用。<br />
<br />
<br />
<br />
=== Current research 研究现状===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
一些研究项目正在使用更复杂的神经模型研究大脑模拟,这些模型在传统的计算体系结构上实现。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个拥有10<sup>11</sup>个神经元的"大脑"的非实时模拟:在由27个处理器组成的集群上,模拟模型的1秒钟耗时50天。2006年,蓝脑计划利用世界上最快的超级计算机架构之一,即 IBM 的蓝色基因(Blue Gene)平台,对包含约10,000个神经元和10<sup>8</sup>个突触的单个大鼠新皮质柱进行了实时模拟。其更长期的目标是建立对人脑生理过程的详细的功能性模拟:蓝脑计划主任亨利·马克拉姆(Henry Markram)2009年在牛津举行的 TED 大会上说,"建造一个人脑并非不可能,我们可以在10年内做到。"此外还有一些声称已模拟出猫脑的有争议的说法。神经-硅接口已被提议作为一种可扩展性可能更好的替代实现策略。<br />
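The gap between these simulations and real time can be made concrete with a little arithmetic (a sketch; the 50-days-per-simulated-second and Blue Brain figures are the ones quoted above):

```python
# Slowdown factor of the 2005 Artificial Intelligence System run quoted
# above: 50 wall-clock days (on 27 processors) per 1 second of model time.
SECONDS_PER_DAY = 86_400
wall_clock_seconds = 50 * SECONDS_PER_DAY
simulated_seconds = 1.0
slowdown = wall_clock_seconds / simulated_seconds  # ~4.3 million x real time

# The 2006 Blue Brain neocortical-column model, by contrast, ran in real
# time but covered only ~10,000 neurons and 1e8 synapses, i.e. roughly
# 10,000 synapses per neuron.
synapses_per_neuron = 1e8 / 10_000

print(f"{slowdown:,.0f}x slower than real time")
```

The contrast illustrates the trade-off the section describes: whole-brain-scale neuron counts at millions of times slower than real time, or real time at cortical-column scale.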
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
汉斯·莫拉维克(Hans Moravec)在其1997年的论文《计算机硬件何时能与人脑匹敌?》中回应了上述论点("大脑更复杂"、"神经元必须建模得更精细")。他测量了现有软件模拟神经组织(特别是视网膜)功能的能力。他的结果既不取决于神经胶质细胞的数量,也不取决于何种处理由何处的神经元来执行。<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at the complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
OpenWorm 项目已经探讨了建模生物神经元的实际复杂性。该项目旨在完全模拟一个蠕虫,其神经网络中只有302个神经元(在总共约1000个细胞中)。项目开始之前,蠕虫的神经网络已经被很好地记录了下来。然而,尽管任务一开始看起来很简单,基于一般神经网络的模型并不起作用。目前,研究的重点是精确模拟生物神经元(部分在分子水平上) ,但结果还不能被称为完全成功。即使在人脑尺度的模型中需要解决的问题的数量与神经元的数量不成比例,沿着这条路径走下去的工作量也是显而易见的。<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches 对基于模拟的研究方法的批评===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
对模拟大脑方法的一个根本性批评来自具身认知:人的具身性被视为人类智能的一个本质方面。许多研究者认为,具身性是意义得以落地的必要条件。如果这种观点正确,任何功能完备的大脑模型都不仅需要包含神经元,还需要更多东西(例如一个机器人身体)。格策尔(Goertzel)提出了虚拟具身(如在《第二人生》中那样),但目前尚不清楚这是否足够。<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
自2005年以来,使用能达到每秒10<sup>9</sup>次计算(cps,库兹韦尔的非标准单位,见上文)以上的微处理器的台式计算机已经面世。根据库兹韦尔(和莫拉维克)所用的大脑处理能力估计,这样的计算机应能支持对蜜蜂大脑的模拟;但尽管有人对此感兴趣,这样的模拟并不存在。其原因至少有三个:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
神经元模型似乎过于简化了(见下一节)。<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
人们对高级认知过程的理解不够充分,无法准确确定用功能性磁共振成像等技术观察到的大脑神经活动究竟与什么相关。<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
即使我们对认知的理解有了足够的进步,早期的仿真程序也可能非常低效,因此需要更多的硬件。<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
有机体的大脑虽然关键,但可能并不是认知模型的合适边界。要模拟蜜蜂的大脑,可能还需要模拟其身体和环境。"延展心灵"(The Extended Mind)论题将这一哲学概念形式化,而对头足类动物的研究已经展示了去中心化系统的明确例子。<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
此外,人脑的规模目前还没有被很好地确定。一种估计认为人脑约有1000亿个神经元和100万亿个突触;另一种估计则是860亿个神经元,其中163亿个位于大脑皮层,690亿个位于小脑。神经胶质细胞的突触目前尚无定量数据,但已知数量极多。<br />
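The two neuron-count estimates quoted above can be compared directly; a small sketch (constants taken from the text, labels only for illustration):

```python
# Comparing the two whole-brain estimates quoted above.
ESTIMATE_A_NEURONS = 100e9    # ~100 billion neurons, ~100 trillion synapses
ESTIMATE_B_TOTAL = 86e9       # 86 billion neurons (Azevedo et al.)
ESTIMATE_B_CORTEX = 16.3e9    # of which 16.3 billion in the cerebral cortex
ESTIMATE_B_CEREBELLUM = 69e9  # and 69 billion in the cerebellum

# Under estimate B, neurons outside cortex and cerebellum (brainstem and
# other structures) come to ~0.7 billion.
remainder = ESTIMATE_B_TOTAL - ESTIMATE_B_CORTEX - ESTIMATE_B_CEREBELLUM

# The two totals disagree by about 16%, which is the sense in which the
# text says the brain's scale is "not currently well-constrained".
discrepancy = (ESTIMATE_A_NEURONS - ESTIMATE_B_TOTAL) / ESTIMATE_B_TOTAL

print(f"{remainder / 1e9:.1f} billion neurons elsewhere")
```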
<br />
<br />
<br />
==Strong AI and consciousness 强人工智能和意识==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
1980年,哲学家约翰·塞尔(John Searle)创造了"强人工智能"(strong AI)一词,作为其中文房间论证的一部分。他想要区分关于人工智能的两种不同假设:<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
*一个人工智能系统可以思考并拥有心灵。(词语“心灵”对哲学家来说有特殊意义,正如在“身心问题”或“心灵哲学”中的使用一样。)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
*一个人工智能系统只能表现得"好像"它在思考并拥有心灵。<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
第一个被称为"强人工智能假设",第二个被称为"弱人工智能假设",因为第一个假设提出了更强的主张:它假定机器中发生了某种特殊的事情,超出了我们所能测试的一切能力。塞尔将"强人工智能假设"称为"强人工智能"。这种用法在人工智能学术研究和教科书中也很常见。例如:<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
弱人工智能假说等同于"通用人工智能是可能的"这一假说。根据罗素和诺维格的说法,"大多数人工智能研究人员认为弱人工智能假说是理所当然的,并不关心强人工智能假说。"<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
与 Searle 不同的是,Ray Kurzweil 使用“强人工智能”这个词来描述任何人工智能系统,这个系统的行为就像它有思想一样,不管哲学家是否能够确定它是否真的有思想。<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
在科幻小说中,AGI 常与在生物身上观察到的意识、知觉、智慧和自我意识等特征联系在一起。然而,按照塞尔的说法,通用智能是否足以产生意识还是一个悬而未决的问题。"强人工智能"(如上文库兹韦尔的定义)不应与塞尔的"强人工智能假设"相混淆:强人工智能假设主张,行为与人一样智能的计算机必然也拥有心灵和意识;而 AGI 仅指机器所表现出的智能程度,无论其是否拥有心灵。<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
除智能之外,人类心灵还有其他一些与强人工智能概念相关的方面,它们在科幻小说和人工智能伦理学中扮演着重要角色:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
这些特征具有道德维度,因为拥有这种形式强人工智能的机器可能拥有法律权利,类似于非人类动物的权利。因此,人们已经开展了初步工作,探讨如何将完全的道德行为体纳入现有的法律和社会框架。这些方法侧重于“强”人工智能的法律地位和权利。<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
然而,比尔 · 乔伊等人认为,具有这些特征的机器可能会威胁人类的生命或尊严。这些特征对于强人工智能是否必要仍有待证明。意识的作用尚不清楚,目前也没有公认的检验其存在的方法。如果一台机器装有模拟意识相关神经活动的装置,它会自动具有自我意识吗?也有可能其中某些特性(比如感知能力)会从完全智能的机器中自然涌现,或者一旦机器开始以明显智能的方式行动,人们就会自然而然地把这些特性归于机器。例如,智能行为可能足以产生感知能力,而不是相反。<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
虽然意识在强人工智能/AGI 中的作用尚有争议,但许多 AGI 研究人员认为,研究实现意识的可能性至关重要。早期,Igor Aleksander 认为创造有意识机器的原理已经存在,但训练这样一台机器理解语言需要四十年。<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
自从1956年开始人工智能研究以来,这一领域的发展速度已经随着时间的推移而放缓,并且阻碍了创造具有人类水平的智能行为的机器的目标。这种延迟的一个可能的解释是计算机缺乏足够的存储空间或处理能力。此外,与人工智能研究过程相关的复杂程度也可能限制人工智能研究的进展。<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
虽然大多数人工智能研究人员相信强人工智能可以在未来实现,但也有一些人,如休伯特 · 德雷福斯和罗杰 · 彭罗斯,否认实现强人工智能的可能性。约翰 · 麦卡锡是相信人类水平人工智能终将实现的众多计算机科学家之一,但其实现日期无法准确预测。<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究人员可能需要修改其学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如 William Clocksin 在2003年所写:“这个框架始于 Weizenbaum 的观察,即智能只相对于特定的社会和文化背景才表现出来。”<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
此外,人工智能研究人员已经能够创造出能够执行复杂工作(如数学)的计算机,但相反地,他们却难以开发出能够执行人类简单任务(如行走)的计算机(莫拉维克悖论)。大卫 · 格勒尼特描述的一个问题是,有些人认为思考和推理是等价的。然而,思想和这些思想的创造者是否被孤立的想法引起了人工智能研究者的兴趣。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员未能兑现的预测,以及对人类行为缺乏完整理解,削弱了人类水平人工智能这一基本构想。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究者对在21世纪实现人工智能的目标保持乐观。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
对于强人工智能进展缓慢,人们还提出了其他可能的原因。科学问题的错综复杂,以及需要通过心理学和神经生理学充分了解人脑,限制了许多研究人员在计算机硬件中模拟人脑功能的努力。许多研究人员倾向于低估对人工智能未来预测的各种疑虑,但如果不认真对待这些问题,人们就会忽视疑难问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
Clocksin 说,阻碍人工智能研究进展的一个概念上的限制是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员第一次开始瞄准人工智能的目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并找出如何设计一台具有特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
抽象的实践(研究人员在结合特定研究语境时往往会重新定义它)使研究人员得以专注于少数几个概念。抽象在人工智能研究中最富成效的应用来自规划和问题求解。虽然其目标是提高计算速度,但抽象的作用也引出了关于抽象算子如何参与的问题。<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
人工智能发展缓慢的一个可能原因是,许多人工智能研究人员承认,启发式方法是计算机性能与人类性能之间存在重大差距的一个领域。为计算机编程的特定功能或许能够满足使其匹配人类智能的许多要求。这些解释未必是强人工智能实现延迟的根本原因,但它们得到了众多研究人员的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
许多人工智能研究人员一直在争论机器是否应该带有情感。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感总结了人类的经历,因为它允许人们记住那些经历。大卫 · 格勒尼特写道: “除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情绪的关注给人工智能研究人员带来了一些问题,随着未来人工智能研究的进展,它与强人工智能的概念相联系。<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
截至2020年3月,AGI 仍属推测,因为尚未有人展示出这样的系统。对于通用人工智能是否会实现以及何时实现,人们看法各异。在一个极端,人工智能先驱赫伯特 · 西蒙在1965年写道:“机器将在二十年内能够完成人类能做的任何工作。”然而,这一预言并未实现。微软联合创始人保罗 · 艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”以及“对认知的科学层面的深入理解”。机器人专家 Alan Winfield 在《卫报》撰文称,现代计算与人类水平人工智能之间的鸿沟,与当前航天飞行和实用超光速航行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
人工智能专家对 AGI 可行性的看法时起时落,并可能在2010年代出现回升。2012年和2013年进行的四次民意调查显示,专家们对“何时有50%的把握实现 AGI”的猜测中位数为2040年至2050年(因调查而异),平均值为2081年。在这些专家中,16.5%的人在被问及同样的问题但置信度要求为90%时回答“永远不会”。关于当前 AGI 进展的进一步讨论,见下文“确认人类水平 AGI 的测试”和“AGI 智商测试”。<br />
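The gap between the median forecast (2040–2050) and the mean (2081) is what one would expect when a minority of far-future guesses skews the distribution: outliers pull the mean upward while barely moving the median. A minimal sketch with made-up forecast years (not the actual survey responses) illustrates the effect; "never" answers are excluded from the numeric summary, as such responses are typically reported separately.<br />

```python
from statistics import mean, median

# Hypothetical forecast years (NOT the survey data): most experts
# guess mid-century, a few guess far-future dates.
forecasts = [2035, 2040, 2045, 2045, 2050, 2050, 2055, 2100, 2200, 2300]

print(median(forecasts))  # 2050.0 -- the middle of the typical guesses
print(mean(forecasts))    # 2092.0 -- dragged upward by the long tail
```

This is why surveys of this kind usually report the median rather than the mean as the "typical" expert forecast.<br />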
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
人工智能构成生存风险、且这一风险需要得到比目前更多关注的论点,已得到许多公众人物的支持;其中最著名的或许是埃隆 · 马斯克、比尔 · 盖茨和斯蒂芬 · 霍金。支持这一论点的最著名的人工智能研究者是斯图尔特 · 罗素。该论点的支持者有时对怀疑论者表示困惑:盖茨表示他不“理解为什么有些人不担心”,霍金则在2014年的社论中批评了普遍的冷漠:“那么,面对可能带来无法估量的收益与风险的未来,专家们肯定在尽一切努力确保最好的结果,对吧?错。如果一个更先进的外星文明给我们发来信息说‘我们几十年后到达’,我们会只是回复‘好的,你们到了再联系我们,我们会把灯留着’吗?大概不会,但这或多或少就是人工智能正在发生的事情。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
许多关注生存风险的学者认为,最好的前进方式是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个问题:程序员可以实现哪些类型的保障措施、算法或架构,才能最大限度地提高其递归自我改进的人工智能在达到超级智能后继续以友好而非破坏性方式行事的概率?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
认为人工智能可能构成生存风险的论点也有许多强烈的反对者。怀疑论者有时指责该论点带有隐秘的宗教色彩,用对超级智能可能性的非理性信仰取代了对全能上帝的非理性信仰;在极端情况下,杰伦 · 拉尼尔(Jaron Lanier)认为,当前机器具有任何形式智能的整个概念都是“一种幻觉”,是富人炮制的“惊天骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
现有的许多批评认为,AGI 在短期内不太可能实现。计算机科学家 Gordon Bell 认为,人类在到达技术奇点之前就会毁灭自己。摩尔定律的最初提出者戈登 · 摩尔宣称:“我是一个怀疑论者。我不相信(技术奇点)有可能发生,至少在很长一段时间内不会。我也不知道我为什么会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,人工智能的生存风险“就像在我们还没踏上火星时就担心火星人口过剩一样”。<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
<br />
* {{cite web |url=https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html |title=Stages of Artificial Intelligence |website=Computer Science |date=2 April 2020}}<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Bostrom|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰 https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=14882 通用人工智能 2020-10-09T15:01:50Z<p>粲兰:</p>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi and has not yet been manually edited or reviewed; we apologise for any reading inconvenience.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike |last=Treder |work=Responsible Nanotechnology |date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref><br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
The Turing Test (Turing)<br />
<br />
图灵测试(图灵)<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
一个机器人和一个人类都与另一个人类进行眼神交流,后者必须评估两者中哪一个是机器,如果它能骗过评估者很大一部分时间,那么机器就通过了测试。注意: 图灵并没有规定什么是智能,只要能认出它是一台机器就不应该认为它是智能的。<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
The Coffee Test (Wozniak)<br />
<br />
咖啡测试(沃兹尼亚克)<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
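The imitation game above is essentially a blinded classification protocol. A minimal Python sketch (the scripted respondents and all names here are hypothetical, not part of any source) shows why an evaluator who cannot tell the answers apart is reduced to chance-level detection, which is exactly the condition under which the machine "passes":

```python
import random

class ScriptedRespondent:
    """Hypothetical stand-in for a human or a machine behind the screen."""
    def __init__(self, style):
        self.style = style

    def answer(self, question):
        # Both respondents answer in the same canned form, so the
        # transcript carries no signal about which one is the machine.
        return f"reply to: {question}"

def imitation_game(rng, human, machine, questions):
    """One round: the evaluator converses, sight unseen, with respondents
    'A' and 'B' and must name the machine. Returns True on detection."""
    labels = ["A", "B"]
    rng.shuffle(labels)
    assignment = dict(zip(labels, [human, machine]))
    transcript = {label: [(q, r.answer(q)) for q in questions]
                  for label, r in assignment.items()}
    # An evaluator that finds the transcripts indistinguishable can only
    # guess; Turing's criterion is that detection stays near chance.
    guess = rng.choice(sorted(transcript))
    return assignment[guess] is machine

def detection_rate(trials=2000, seed=0):
    rng = random.Random(seed)
    human = ScriptedRespondent("human")
    machine = ScriptedRespondent("machine")
    questions = ["What is a bicycle for?", "Tell me a joke."]
    hits = sum(imitation_game(rng, human, machine, questions)
               for _ in range(trials))
    return hits / trials  # ~0.5 when the machine is indistinguishable
```

Over many trials, `detection_rate()` hovers around 0.5: the machine fools the evaluator a significant fraction of the time precisely when the evaluator can do no better than guessing.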
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
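As a toy illustration of that second use, the sketch below (hypothetical names throughout) gates repeated login attempts behind a challenge. The plain-string challenge stands in for a genuinely AI-hard task, such as reading distorted text, which a human can solve but an automated attacker, by assumption, cannot:

```python
import random
import string

class CaptchaGate:
    """Toy login gate: after a few failures, every further attempt must
    also solve a challenge. The plain-string challenge is a stand-in for
    an AI-hard task (e.g. reading distorted text) that humans can do but
    automated attackers, by assumption, cannot."""

    def __init__(self, password, rng, free_attempts=3):
        self.password = password
        self.rng = rng
        self.free_attempts = free_attempts
        self.failures = 0

    def challenge(self):
        # A real CAPTCHA would render this as a distorted image.
        return "".join(self.rng.choices(string.ascii_lowercase, k=6))

    def login(self, guess, solve_captcha=None):
        if self.failures >= self.free_attempts:
            secret = self.challenge()
            # A purely automated attacker has no solver, so its guessing
            # rate collapses here however fast it can submit passwords.
            if solve_captcha is None or solve_captcha(secret) != secret:
                return False
        if guess == self.password:
            return True
        self.failures += 1
        return False
```

A brute-force loop stalls after `free_attempts` failures, while a human (modelled here by a solver that simply reads the challenge correctly) can still log in.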
<br />
<br />
<br />
== History == <br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
<br />
=== Narrow AI research ===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
<br />
<br />
=== Modern artificial general intelligence research ===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
<br />
<br />
==Processing power needed to simulate a brain==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
."The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
“基本思路是,取一个特定的大脑,详细地扫描其结构,并构建一个无比还原的原始大脑的软件模型,以至于在适当的硬件上运行时,它基本上与原始大脑的行为方式相同。基于医学研究的大脑模拟背景下,全脑模拟在计算神经科学和神经信息学医学期刊上被讨论过。它是人工智能研究中讨论的一种强人工智能的方法。可提供必要详细的理解的神经成像技术正在迅速提高,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》书中预测,一张质量足够高的地图将在类似的时间尺度上达到所需的计算能力。<br />
<br />
<br />
<br />
===Early estimates===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
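The arithmetic behind these figures can be checked in a few lines of Python. This is a rough sketch: the ~1 teraFLOPS figure used as the 1997 starting point is an assumption for illustration, not a number from the text above.

```python
import math

# Synapse count implied by the figures above: ~1e11 neurons with ~7,000
# synapses each, compared with the 1e14-5e14 adult estimates.
neurons = 1e11
synapses_per_neuron = 7e3
synapses = neurons * synapses_per_neuron  # ~7e14, the same order of magnitude

# Kurzweil's adopted figure is 1e16 computations per second. Assuming the
# fastest supercomputer of 1997 ran at roughly 1e12 FLOPS (an assumption)
# and capacity doubles every 1.1 years, as the trendline above posits:
doubling_years = 1.1
years_needed = doubling_years * math.log2(1e16 / 1e12)
year_reached = 1997 + years_needed  # ~2012, consistent with 10 petaFLOPS in 2011

print(f"synapses ≈ {synapses:.1e}, 1e16 FLOPS reached ≈ {year_reached:.0f}")
```

The extrapolated date lands close to the year 10 petaFLOPS was actually achieved, which is what makes the 1.1-year doubling assumption in the chart plausible.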
<br />
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
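The cost gap can be illustrated with a minimal sketch (illustrative only; neither model is the one used by any particular project): the standard artificial neuron is a single weighted sum per update, while even a modest step toward biological realism, such as a leaky integrate-and-fire unit, introduces continuous membrane state that must be advanced in small time steps.

```python
import math

def artificial_neuron(inputs, weights, bias=0.0):
    """The simple model: one weighted sum and a squashing function per update."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def lif_step(v, i_in, dt=1e-4, tau=0.02, v_rest=-0.065, v_thresh=-0.050, r=1e7):
    """One Euler step of a leaky integrate-and-fire neuron. Each simulated
    second now costs 1/dt updates per neuron, before any chemistry is added."""
    v += dt * ((v_rest - v) + r * i_in) / tau
    if v >= v_thresh:
        return v_rest, True   # spike, then reset to resting potential
    return v, False

# A constant input current drives the membrane from rest up to threshold.
v, spiked = -0.065, False
for _ in range(1000):          # 100 ms of simulated time at 0.1 ms steps
    v, fired = lif_step(v, 2e-9)
    spiked = spiked or fired
```

Even this toy spiking model is far short of molecular-scale detail; it only shows why each added layer of realism multiplies the per-neuron work.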
<br />
<br />
<br />
<br />
===Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
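The gap to real time in the 2005 result is easy to quantify from the figures quoted above:

```python
# The Artificial Intelligence System project's 2005 result: 50 days of
# wall-clock time on a 27-processor cluster to simulate 1 second of the model.
seconds_simulated = 1
wall_clock_seconds = 50 * 24 * 3600
slowdown = wall_clock_seconds / seconds_simulated  # ~4.3 million times slower than real time
```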
<br />
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
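The two estimates are closer than they may appear; a quick check on the numbers quoted above:

```python
# Estimate 1: ~100 billion neurons and ~100 trillion synapses.
# Estimate 2: 86 billion neurons, of which 16.3 billion are cortical
# and 69 billion cerebellar.
est1_neurons = 100e9
est2_neurons = 86e9
cortex, cerebellum = 16.3e9, 69e9

elsewhere = est2_neurons - cortex - cerebellum  # ~0.7 billion neurons in the rest of the brain
ratio = est1_neurons / est2_neurons             # ~1.16: the estimates differ by ~16%
```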
<br />
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed over time and has stalled the aim of creating machines capable of intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity inherent in the process of AI research may also limit its progress.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems encountered in AI research over the past decades have further impeded its progress. The failed predictions made by AI researchers and the lack of a complete understanding of human behaviors have diminished the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the slow progress of research toward strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers' ability to emulate the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate the doubt involved in future predictions of AI, but without taking those issues seriously, people can overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
<br />
The practice of abstraction, which researchers tend to redefine when working within a particular context, lets them concentrate on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not guaranteed to be the fundamental causes of the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
<br />
<br />
Many AI researchers have debated whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|volume=62|year=2019|journal=Business Horizons|pages=15–25|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael}}</ref><br />
<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰 https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=14881 通用人工智能 2020-10-09T13:21:32Z<p>粲兰:</p>
<hr />
<div>This entry has been machine-translated by Caiyun Xiaoyi and has not yet been manually edited or proofread; we apologize for any inconvenience in reading.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}}<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
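The Turing Test above can be sketched as a small simulation harness. This is a toy sketch under assumed interfaces: `judge.ask`, `judge.identify_machine`, and the callable `human`/`machine` respondents are hypothetical names for illustration, not any standard benchmark API.

```python
import random

def imitation_game(judge, human, machine, n_rounds=5):
    """One blind session: the judge questions two unseen respondents
    labelled 'A' and 'B' and must name which one is the machine."""
    # Randomize the hidden assignment so the judge cannot rely on order.
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": human}
    transcript = {"A": [], "B": []}
    for _ in range(n_rounds):
        for label, respond in respondents.items():
            question = judge.ask(label, transcript[label])
            transcript[label].append((question, respond(question)))
    guess = judge.identify_machine(transcript)  # returns "A" or "B"
    return respondents[guess] is machine        # True: the machine was caught

def fooling_rate(judge, human, machine, trials=100):
    """Fraction of sessions in which the judge fails to spot the machine.
    The machine 'passes' if this fraction is significant."""
    caught = sum(imitation_game(judge, human, machine) for _ in range(trials))
    return 1 - caught / trials
```

Note that with indistinguishable respondents and a blind random assignment, any judge is right only about half the time, so a fooling rate near 0.5 is the chance baseline against which a machine's performance would be read.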
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<br />
<br />
对于计算机来说,最困难的问题被非正式地称为“AI完全问题”或“AI困难问题”,这意味着解决它们相当于具备人类智能的一般能力(即强人工智能),超出了特定用途算法的能力范围。<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.<br />
<br />
据推测,AI完全问题包括通用的计算机视觉、自然语言理解,以及在解决任何现实世界问题时处理意外情况的能力。<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks.<br />
<br />
仅凭当前的计算机技术无法解决AI完全问题,还需要人工计算。这一特性可能很有用,例如用来检测人类是否在场(这正是 CAPTCHA 的目标),以及在计算机安全中抵御暴力破解攻击。<br />
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
Modern AI research began in the mid 1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus prediction of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," although Minsky states that he was misquoted.<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<br />
<br />
然而,在20世纪70年代初,情况变得很明显:研究人员严重低估了这一项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。随着20世纪80年代的开始,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间表,其中包括“进行一次随意的交谈”这样的通用人工智能目标。为了回应这一项目以及专家系统的成功,工业界和政府都重新向这一领域注入资金。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标也从未实现。这是20年间的第二次,曾预测通用人工智能即将实现的人工智能研究人员被证明从根本上错了。到了20世纪90年代,人工智能研究人员已因做出无法兑现的承诺而声名受损。他们变得根本不愿做预测,也避免提及“人类水平”的人工智能,以免被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<br />
<br />
在20世纪90年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的具体子问题(例如人工神经网络和统计机器学习),取得了远为巨大的商业成功和学术声望。这些“应用人工智能”系统如今在整个技术产业中得到广泛应用,这方面的研究在学术界和产业界都获得了大量资助。目前,这一领域的发展被视为一种新兴趋势,预计要在10年以上之后才会进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过把解决各种子问题的程序结合起来,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会与传统的自上而下的路线在中途相会,从而提供在推理程序中一直令人沮丧地难以捉摸的真实世界能力和常识知识。当象征性的黄金道钉被钉下、把这两方面的努力连为一体时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote><br />
<br />
然而,即使这一基本思路也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在其1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的‘自上而下’(符号)方法终将在中间某处与‘自下而上’(感官)方法相会。如果本文关于符号基础的考虑是正确的,那么这种期望就是无可救药地模块化的,从感知到符号实际上只有一条可行的路径:自下而上。像计算机的软件层那样自由浮动的符号层永远无法通过这条路径达到(反之亦然)——也不清楚我们为什么要试图达到这样一个层次,因为那看起来无异于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.<br />
<br />
“通用人工智能”(artificial general intelligence)一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用。这一术语在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。其研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)始于1984年的 Cyc 项目,以及艾伦·纽厄尔(Allen Newell)的 Soar 项目,都被认为属于通用人工智能的范围。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一个通用人工智能暑期学校。第一门大学课程由托多尔·阿瑙多夫(Todor Arnaudov)于2010年和2011年在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门通用人工智能课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对通用人工智能关注甚少,一些人声称智能过于复杂,短期内无法完全复制。不过,仍有少数计算机科学家活跃于通用人工智能研究,其中许多人正在为一系列通用人工智能会议做出贡献。这些研究极其多样化,而且往往具有开创性。格兹尔在其书的序言中说,对构建一个真正灵活的通用人工智能所需时间的估计从10年到超过一个世纪不等,但通用人工智能研究界似乎一致认为,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》中讨论的时间线(即2015年至2045年之间)是可信的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展是否会如此迅速。明确追求通用人工智能的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及其关联的红木神经科学研究所(Redwood Neuroscience Institute)。此外,机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等组织也已成立,以影响通用人工智能的发展路径。最后,像人脑计划(Human Brain Project)这样的项目的目标是构建一个可运行的人脑模拟。2017年的一项通用人工智能调查对45个已知的、明确或隐含地(通过已发表的研究)研究通用人工智能的“活跃研发项目”进行了分类,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对谷歌人工智能、苹果的 Siri 等公开且可自由访问的弱人工智能进行了智能测试。这些人工智能最高达到约47的数值,大致相当于一名上一年级的六岁儿童。成年人的平均值约为100。2014年也进行过类似的测试,当时智商得分的最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI.<br />
<br />
2019年,电子游戏程序员兼航空航天工程师约翰·卡马克(John Carmack)宣布了研究通用人工智能的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain ==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popular discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
A popular discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<br />
<br />
一种被广泛讨论的实现通用智能行为的方法是全脑模拟:通过详细扫描和绘制生物大脑,并将其状态复制到计算机系统或其他计算设备中,来构建一个低层次的大脑模型。计算机运行的模拟模型对原脑如此忠实,以至于它的行为在本质上与原脑相同,或者就一切实际目的而言,与原脑无法区分。<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
"The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain." Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
“基本思想是取一个特定的大脑,详细扫描其结构,并构建一个对原脑如此忠实的软件模型,以至于在适当的硬件上运行时,它的行为在本质上与原脑相同。”全脑模拟在计算神经科学和神经信息学中、在以医学研究为目的的大脑模拟背景下受到讨论;在人工智能研究中,它被作为实现强人工智能的一种途径来讨论。能够提供必要详细理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,质量足够高的大脑图谱将在与所需计算能力相近的时间尺度上出现。<br />
<br />
<br />
<br />
===Early estimates ===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at the level of neural simulation, while the Sandberg and Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
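The figures above can be sanity-checked with a few lines of arithmetic. The Python sketch below uses only the estimates quoted in this section (the neuron and synapse counts, Kurzweil's 10<sup>16</sup> cps figure, and the 1.1-year doubling time from the figure caption); it is an illustration of the quoted numbers, not an authoritative model.

```python
import math

# Estimates quoted in this section (all approximate).
NEURONS = 1e11              # ~100 billion neurons
SYNAPSES_PER_NEURON = 7e3   # ~7,000 connections per neuron
SUPS_ESTIMATE = 1e14        # synaptic updates per second (switch model)
KURZWEIL_CPS = 1e16         # Kurzweil's 1997 hardware figure
DOUBLING_YEARS = 1.1        # trendline assumption from the figure

# Implied synapse count, near the upper end of the adult range quoted.
synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"implied synapses: {synapses:.0e}")             # 7e+14

# Kurzweil's figure expressed in petaFLOPS (1 petaFLOPS = 1e15 flop/s),
# assuming one "computation" equals one floating point operation.
print(f"petaFLOPS needed: {KURZWEIL_CPS / 1e15:.0f}")  # 10

# With capacity doubling every 1.1 years, the time to close the gap
# between the switch-model SUPS estimate and Kurzweil's cps figure:
years = math.log2(KURZWEIL_CPS / SUPS_ESTIMATE) * DOUBLING_YEARS
print(f"100x gap closes in ~{years:.1f} years")        # ~7.3
```

Note that the implied synapse count (7×10<sup>14</sup>) slightly exceeds the quoted adult range, which reflects the spread among the underlying estimates rather than an error in any single one.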
<br />
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
===Current research===
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
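The performance gap these projects faced is easy to quantify from the numbers given. A minimal sketch, using only the figures quoted above (no hardware detail is modeled):

```python
# Slowdown arithmetic for the simulations described above.

SECONDS_PER_DAY = 86_400

# Artificial Intelligence System project (2005): 50 days of wall-clock
# time on a 27-processor cluster to simulate 1 second of model time.
slowdown = (50 * SECONDS_PER_DAY) / 1.0
print(f"slowdown vs. real time: {slowdown:,.0f}x")  # 4,320,000x

# Blue Brain (2006) ran in real time, but on a far smaller network:
# ~1e4 neurons in one neocortical column vs. ~1e11 in a human brain.
print(f"neuron-count gap: {1e11 / 1e4:.0e}")        # 1e+07
```

In other words, the 2005 simulation ran more than four million times slower than real time, and the real-time 2006 simulation covered roughly one ten-millionth of the human brain's neuron count.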
<br />
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aims at a complete simulation of a worm whose neural network contains only 302 neurons (out of about 1,000 cells in total). The animal's neural network had been well documented before the project began. However, although the task seemed simple at first, models based on a generic artificial neural network did not work. Efforts are currently focused on precise emulation of biological neurons (partly at the molecular level), but the result cannot yet be called a complete success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is clear.<br />
<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
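The second estimate's components can be cross-checked directly. A quick sketch using the article's own figures (in billions of neurons):

```python
# Cross-checking the 86-billion-neuron estimate quoted above.
# Figures are in billions of neurons.

total = 86.0
cortex = 16.3
cerebellum = 69.0

rest = total - cortex - cerebellum
print(f"outside cortex and cerebellum: {rest:.1f} billion")  # 0.7
print(f"cerebellum share: {cerebellum / total:.0%}")         # 80%
```

The cortex and cerebellum thus account for nearly the entire count, with only about 0.7 billion neurons left for the rest of the brain, and the cerebellum alone holds roughly 80% of all neurons.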
<br />
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of many computer scientists who believe human-level AI will be accomplished, though the date cannot be accurately predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. Unfulfilled predictions made by AI researchers and the lack of a complete understanding of human behaviors have diminished the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the slow progress of strong AI research. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers' attempts to emulate the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate the doubt involved in future predictions of AI, but without taking those issues seriously, people can overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, allows researchers to concentrate on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics remains an area with a significant gap between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed into a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"<br />
<br />
{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
<br />
* "[https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Stages of Artificial Intelligence]", Computer Science, 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last = Goertzel | first = Ben | authorlink = Ben Goertzel | last2 = Wang | first2 = Pei | year = 2006 | title = Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last = de Vega | editor1-first = Manuel | editor2-last = Glenberg | editor2-first = Arthur | editor3-last = Graesser | editor3-first = Arthur | year = 2008 | title = Symbols and Embodiment: Debates on meaning and cognition | publisher = Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi (彩云小译) and has not been manually edited or proofread; apologies for any inconvenience to readers.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />
<br />
<br />
This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />
<br />
<br />
{{Harvnb|Russell|Norvig|2003}},<br />
<br />
<br />
{{Harvnb|Luger|Stubblefield|2004}},<br />
<br />
<br />
{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />
<br />
<br />
{{Harvnb|Nilsson|1998}}.<br />
<br />
<br />
</ref><br />
<br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks.<br />
<br />
仅靠目前的计算机技术无法解决AI完全问题,还需要人类计算。这一特性可能很有用,例如用于检测人类是否在场(CAPTCHA 的目标正是如此),以及在计算机安全中抵御暴力破解攻击。<br />
<br />
<br />
<br />
== History 历史 == <br />
<br />
=== Classical AI 经典人工智能 ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
Modern AI research began in the mid 1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus prediction of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," although Minsky states that he was misquoted.<br />
<br />
现代人工智能研究始于20世纪50年代中期。第一代人工智能研究人员确信,通用人工智能是可能的,并将在短短几十年内出现。人工智能的先驱赫伯特·A·西蒙(Herbert A. Simon)在1965年写道: “机器将在20年内拥有完成人类能做的任何工作的能力。”他们的预言启发了斯坦利·库布里克和亚瑟·查理斯·克拉克塑造的角色哈尔9000,它代表了人工智能研究人员相信他们截至2001年能够创造出的东西。人工智能先驱马文·明斯基(Marvin Minsky)是一个项目顾问,该项目旨在根据当时的一致预测,使哈尔9000尽可能逼真; 克里维尔援引他在1967年关于这个问题的话说,“在一代人的时间里... ... 创造‘人工智能’的问题将大体上得到解决,”尽管明斯基声称,他的话被错误引用了。<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<br />
<br />
然而,在20世纪70年代初,研究人员显然严重低估了该项目的难度。资助机构开始对通用人工智能持怀疑态度,并向研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。进入20世纪80年代,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个十年的时间表,其中包括“进行日常对话”等通用人工智能目标。为了应对这一计划以及专家系统的成功,工业界和政府都重新向这一领域注入资金。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标也从未实现。这是20年间人工智能研究人员对通用人工智能即将实现的预测第二次被证明是根本错误的。到了20世纪90年代,人工智能研究人员因做出虚假承诺而声名狼藉。他们变得根本不愿意做预测,也避免提及“人类水平”的人工智能,以免被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<br />
<br />
在1990年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的具体子问题(例如人工神经网络和统计机器学习),取得了远为巨大的商业成功和学术声望。这些“应用人工智能”系统现在在整个技术产业中得到广泛应用,这方面的研究在学术界和产业界都获得了大量资助。目前,这一领域的发展被认为是一个新兴趋势,预计要在10多年后才会进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过结合解决各种子问题的程序,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的通往人工智能的路线,终有一天会在半途以上与传统的自上而下的路线相遇,从而提供在推理程序中一直令人沮丧地难以捉摸的真实世界能力和常识知识。当象征性的金道钉被钉下、把这两方面的努力连接起来时,就会产生完全智能的机器。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote><br />
<br />
然而,连这一基本理念也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的“自上而下”(符号)方法终将在中间某处与“自下而上”(感官)方法相遇。如果本文中的基础性考虑是正确的,那么这种期望就是无可救药的模块化思维,从感官到符号实际上只有一条可行的路径:自下而上。像计算机软件层那样自由浮动的符号层永远无法通过这条路径达到(反之亦然),也不清楚我们为什么要试图达到这样一个层次,因为到达那里似乎只不过是把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.<br />
<br />
“人工通用智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用。这个术语在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。相关研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)的 Cyc 项目(始于1984年)以及艾伦·纽厄尔(Allen Newell)的 Soar 项目都被认为属于通用人工智能的范围。王培(Pei Wang)和本·格兹尔将2006年的通用人工智能研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届通用人工智能暑期学校。第一门大学课程由 Todor Arnaudov 于2010年和2011年在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门 AGI 课程,由 Lex Fridman 组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对 AGI 关注甚少,一些人声称智能过于复杂,无法在短期内完全复制。不过,仍有少数计算机科学家活跃于 AGI 研究,其中许多人正在为一系列 AGI 会议做出贡献。这项研究极其多样化,而且往往具有开创性。在他的书的引言中,Goertzel 说,对于构建一个真正灵活的 AGI 所需的时间,估计从10年到一个多世纪不等,但 AGI 研究界的共识似乎是,Ray Kurzweil 在《奇点迫近》中讨论的时间表(即2015年至2045年之间)是合理的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流的人工智能研究人员怀疑进展是否会如此之快。明确以 AGI 为目标的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及其附属的 Redwood Neuroscience Institute。此外,还成立了机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等机构来影响 AGI 的发展路径。最后,像人脑计划(Human Brain Project)这样的项目的目标是建立一个人脑的功能模拟。2017年的一项 AGI 调查归类了45个明确地或含蓄地(通过已发表的研究)研究 AGI 的已知“活跃研发项目”,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对谷歌人工智能、苹果 Siri 等公开且可自由访问的弱人工智能进行了智力测试。这些人工智能最高达到了约47的数值,大致相当于一年级的六岁儿童,而成年人的智商平均约为100。2014年也进行过类似的测试,当时的智商得分最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI.<br />
<br />
2019年,电子游戏程序员兼航空航天工程师约翰·卡马克(John Carmack)宣布了研究 AGI 的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain 模拟大脑所需的计算能力==<br />
<br />
<br />
<br />
===Whole brain emulation 全脑模拟===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popular discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
A popular discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
一种被广泛讨论的实现通用智能行为的方法是全脑模拟:通过详细扫描和绘制一个生物大脑,并将其状态复制到计算机系统或其他计算设备中,来建立一个低层次的大脑模型。计算机运行的模拟模型对原型如此忠实,以至于它的行为在本质上与原始大脑相同,或者就所有实际目的而言与之难以区分。<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
"The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
“基本思想是取一个特定的大脑,详细扫描其结构,并构建一个对原型如此忠实的软件模型,使其在合适的硬件上运行时,行为方式与原始大脑基本相同。”在计算神经科学和神经信息学中,全脑模拟是在服务于医学研究的大脑模拟这一背景下讨论的;在人工智能研究中,它则被作为通向强人工智能的一条途径来讨论。能够提供所需详细理解的神经成像技术正在迅速进步,未来学家 Ray Kurzweil 在《奇点迫近》一书中预测,足够质量的大脑图谱将在与所需计算能力相近的时间尺度上出现。<br />
<br />
<br />
<br />
===Early estimates 早期估计===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, <{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}> Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises.]] For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10<sup>11</sup> (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10<sup>16</sup> "computations" would be equivalent to 10 petaFLOPS, achieved in 2011). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account for [[glial cells]], which are at least as numerous as neurons, may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
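To make the contrast concrete, the kind of "simple" artificial neuron assumed in such estimates can be written in a few lines. This is a generic point-neuron sketch of my own, not any specific model Kurzweil uses:

```python
import math

def point_neuron(inputs, weights, bias):
    """A generic artificial neuron: weighted sum passed through a sigmoid.

    Everything the surrounding text says a faithful simulation would still
    need -- ion-channel kinetics, dendritic geometry, neurotransmitter
    chemistry, glial interactions -- is absent from this abstraction.
    """
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid activation in (0, 1)

# Two inputs with fixed illustrative weights:
print(point_neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # ~0.668
```

A detailed biological model would replace the single `net` sum with coupled differential equations per compartment, which is where the orders-of-magnitude overhead discussed above comes from.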
<br />
<br />
<br />
<br />
=== Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
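The quoted runtime implies an enormous real-time slowdown. A rough calculation from the article's figures (the arithmetic, not the figures, is mine):

```python
# 50 days of wall-clock time on a 27-processor cluster to simulate
# 1 second of model time (the 2005 figure quoted above).
SECONDS_PER_DAY = 86_400
wall_clock_s = 50 * SECONDS_PER_DAY  # 4,320,000 s of computation
model_time_s = 1

slowdown = wall_clock_s / model_time_s
print(f"Slowdown vs. real time: {slowdown:,.0f}x")  # 4,320,000x
```

By this measure the 2005 simulation ran more than four million times slower than real time, which puts the Blue Brain project's real-time (but much smaller) cortical-column simulation the following year in context.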
<br />
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aims at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, models based on a generic neural network did not work. Currently, efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest,<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists.{{Citation needed|date=April 2011}} There are at least three reasons for this:<br />
<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
Besides intelligence, there are other aspects of the human mind relevant to the concept of strong AI, and these play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed over time and has stalled the aim of creating machines capable of intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack sufficient memory or processing power.{{sfn|Clocksin|2003}} In addition, the complexity inherent in the process of AI research itself may also limit its progress.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, some individuals, such as [[Hubert Dreyfus]] and [[Roger Penrose]], deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of several computer scientists who believed that human-level AI will be accomplished, but that a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究人员可能需要修改其学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如 William Clocksin 在2003年所写: “这一框架始于 Weizenbaum 的观察,即智能只有相对于特定的社会和文化背景才得以显现。”<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking (Moravec's paradox). A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent. However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.<br />
<br />
此外,人工智能研究人员已经能够制造出可以完成对人类来说很复杂的工作(如数学)的计算机,但反过来,他们却难以开发出能够执行对人类来说很简单的任务(如行走)的计算机(莫拉维克悖论)。大卫 · 格勒尼特描述的一个问题是,有些人假定思考和推理是等价的。然而,思想与思想的产生者能否彼此分离这一问题,一直引起人工智能研究者的兴趣。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI. Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员做出的未能兑现的预测,以及对人类行为缺乏完整的理解,削弱了人类水平人工智能这一基本构想。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究人员仍对在21世纪实现人工智能的目标持乐观态度。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware. Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.<br />
<br />
对于强人工智能的研究进展为何旷日持久,人们还提出了其他可能的原因。错综复杂的科学问题,以及需要通过心理学和神经生理学充分理解人脑,限制了许多研究者在计算机硬件中模拟人脑功能。许多研究者往往低估未来人工智能预测中所涉及的种种疑虑,但如果不认真对待这些问题,人们就会忽视那些棘手问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment. When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.<br />
<br />
Clocksin 说,阻碍人工智能研究进展的一个概念上的限制是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员最初瞄准人工智能这一目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并找出如何设计能够执行特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.<br />
<br />
抽象的实践(研究者在特定研究语境中往往会对其重新定义)使研究人员得以将注意力集中在少数几个概念上。抽象在人工智能研究中最有成效的应用来自规划与问题求解。尽管其目标是提高计算速度,抽象的作用也引出了关于抽象算子如何参与其中的问题。<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is a section that contains a significant breach between computer performance and human performance. The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed by numerous researchers.<br />
<br />
人工智能进展缓慢的一个可能原因,与许多人工智能研究者的共识有关,即启发式方法是计算机性能与人类性能之间存在显著差距的一个领域。为计算机编程的特定功能或许能够满足使其匹敌人类智能的许多要求。这些解释未必就是强人工智能姗姗来迟的根本原因,但得到了众多研究者的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
There have been many AI researchers that debate over the idea whether machines should be created with emotions. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own. Emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<br />
<br />
许多人工智能研究人员一直在争论机器是否应该带有情感。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感总结了人类的经历,因为它允许人们记住那些经历。大卫 · 格勒尼特写道: “除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情绪的关注给人工智能研究人员带来了一些问题,随着未来人工智能研究的进展,它与强人工智能的概念相联系。<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
As of March 2020, AGI remains speculative as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<br />
<br />
截至2020年3月,AGI 仍停留在推测阶段,因为尚未有任何此类系统被实际展示。对于通用人工智能是否会实现以及何时实现,人们的看法各不相同。在一个极端,人工智能先驱赫伯特 · 西蒙在1965年写道: “机器将在二十年内能够完成人类能做的任何工作。”然而,这一预言并未成真。微软联合创始人保罗 · 艾伦认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”以及“对认知的科学性深入理解”。机器人专家 Alan Winfield 在《卫报》上撰文称,现代计算与人类水平人工智能之间的鸿沟,与当前的太空飞行和实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further current AGI progress considerations can be found below Tests for confirming human-level AGI and IQ-tests AGI.<br />
<br />
人工智能专家对 AGI 可行性的看法时起时落,并可能在2010年代有所回升。2012年和2013年进行的四次民意调查显示,专家们对“有50%把握 AGI 将会出现”的年份的中位数估计为2040年至2050年(因调查而异),平均值为2081年。另有16.5%的专家在被问及“有90%把握”的年份时回答“永远不会”。关于 AGI 当前进展的进一步讨论,可参见下文“确认人类水平 AGI 的测试”和“AGI 的智商测试”。<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial: "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here – we'll leave the lights on?' Probably not – but this is more or less what is happening with AI."<br />
<br />
认为人工智能构成存在性风险、且这一风险需要得到远比现在更多关注的论点,已得到许多公众人物的支持;其中最著名的或许是埃隆 · 马斯克、比尔 · 盖茨和斯蒂芬 · 霍金。支持这一论点的最著名的人工智能研究者是斯图尔特 · 罗素。该论点的支持者有时会对怀疑论者表示困惑: 盖茨表示他不“理解为什么有些人不担心” ,霍金则在2014年的社论中批评了普遍的冷漠: “面对收益和风险都无法估量的可能未来,专家们肯定会尽一切努力确保最好的结果,对吗?错。如果一个更高级的外星文明给我们发来信息说‘我们几十年后就到’,我们会只回一句‘好的,到了给我们打电话,我们会把灯留着’吗?大概不会,但人工智能领域正在发生的事情或多或少正是如此。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<br />
<br />
许多关注存在性风险的学者认为,最好的出路是开展(可能是大规模的)研究来解决困难的“控制问题” ,以回答这样一个问题: 程序员可以实现哪些类型的保障措施、算法或架构,以最大限度地提高其递归自我改进的人工智能在达到超级智能后仍以友好而非破坏性方式行事的概率?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<br />
<br />
认为人工智能可能构成存在性风险的论点也遭到许多强烈的反对。怀疑论者有时指责该论点带有隐秘的宗教色彩,即以对超级智能可能性的非理性信仰取代对全能上帝的非理性信仰;在极端情况下,杰伦 · 拉尼尔(Jaron Lanier)认为,“当前的机器在任何意义上具有智能”这一整套观念本身就是富人制造的“一种幻觉”和“惊天骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<br />
<br />
现有的许多批评认为,AGI 在短期内不太可能实现。计算机科学家 Gordon Bell 认为,人类在到达技术奇点之前就会自我毁灭。摩尔定律的最初提出者戈登 · 摩尔宣称: “我是一个怀疑论者。我不相信(技术奇点)可能发生,至少在很长一段时间内不会。我也不知道自己为什么会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,人工智能的存在性风险“就像在我们还没有踏上火星时就担心火星人口过剩一样”。<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
• Stages of Artificial Intelligence, "[https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science]", 2 April 2020.{{refbegin|2}}<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation<br />
<br />
| editor1-last = de Vega | editor1-first = Manuel<br />
<br />
| editor2-last = Glenberg | editor2-first = Arthur<br />
<br />
| editor3-last = Graesser | editor3-first = Arthur<br />
<br />
| year = 2008<br />
<br />
| title = Symbols and Embodiment: Debates on meaning and cognition<br />
<br />
| publisher = Oxford University Press<br />
<br />
| isbn=978-0-19-921727-4<br />
<br />
}}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=14833通用人工智能2020-10-08T08:07:44Z<p>粲兰:</p>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi and has not yet been manually edited or proofread; we apologize for any inconvenience this causes while reading.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
<br />
<br />
== History ==<br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and avoided any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
然而,在20世纪70年代初,研究人员显然严重低估了该项目的难度。资助机构开始对通用人工智能持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。随着20世纪80年代的开始,日本的'''<font color="#ff8000">第五代计算机项目(Fifth Generation Computer Project)</font>'''重新唤起了人们对通用人工智能的兴趣,并设定了一个长达10年的时间线,其中包括“进行日常对话”这样的通用人工智能目标。为了回应这一计划以及专家系统的成功,工业界和政府都重新将资金投入这一领域。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标从未实现。这是20年里人工智能研究人员第二次被证明其对通用人工智能即将实现的预测根本错误。到了20世纪90年代,人工智能研究人员因做出虚假承诺而声名受损。他们变得根本不愿做预测,并避免提及“人类水平”的人工智能,以免被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research 狭义人工智能的研究===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where it can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development in this field is considered an emerging trend, and a mature stage is expected in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
<br />
在1990年代和21世纪初,主流人工智能取得了更大的商业成功和学术声望,因为它们把重点放在能够产生可验证结果和商业应用的具体子问题上,例如人工神经网络和统计机器学习。这些“应用人工智能”系统现在在整个技术产业中得到广泛应用,这方面的研究得到了学术界和产业界的大量资助。目前,这一领域的发展被认为是一个新兴的趋势,并有望在10多年内进入一个成熟的阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
<br />
大多数主流人工智能研究人员希望,通过把解决各个子问题的程序结合起来,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会与传统的自上而下的路线中途相会,从而提供在推理程序中一直令人沮丧地难以企及的真实世界能力和常识知识。当象征性的金道钉被钉下、将这两方面的努力连为一体时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
<br />
然而,即使这一基本理念也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在其1990年关于'''<font color="#ff8000">符号基础假说(the Symbol Grounding Hypothesis)</font>'''的论文结尾写道:“人们经常表达这样的期望:建模认知的‘自上而下’(符号)方法将在中间某处与‘自下而上’(感官)方法相会。如果本文关于符号基础的考虑是正确的,那么这种期望就是无可救药的模块化思维,从感官到符号其实只有一条可行的路径:自底向上。像计算机软件层那样自由漂浮的符号层永远无法通过这条路径达到(反之亦然);也不清楚我们为什么要试图达到这样一个层次,因为那看起来无异于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
=== Modern artificial general intelligence research 现代通用人工智能的研究===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
<br />
“人工通用智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用。这个术语在2002年左右被肖恩·莱格(Shane Legg)和本·格兹尔(Ben Goertzel)重新引入并推广。这一研究目标本身要古老得多,例如道格·雷纳特(Doug Lenat)的 Cyc 项目(始于1984年)和艾伦·纽厄尔(Allen Newell)的 Soar 项目都被认为属于 AGI 的范围。王培(Pei Wang)和本·格兹尔将2006年的 AGI 研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届 AGI 暑期学校。第一个大学课程于2010年和2011年由托多尔·阿尔瑙多夫(Todor Arnaudov)在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门 AGI 课程,由莱克斯·弗里德曼(Lex Fridman)组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对 AGI 关注甚少,一些人声称智能过于复杂,无法在短期内完全复制。不过,仍有少数计算机科学家活跃在 AGI 研究中,其中许多人正在为一系列 AGI 会议做出贡献。这些研究极其多样,而且往往具有开创性。格兹尔在其著作的序言中说,对于构建一个真正灵活的 AGI 所需时间的估计从10年到一个多世纪不等,但 AGI 研究界的共识似乎是,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》(The Singularity is Near)中讨论的时间表(即2015年至2045年之间)是合理的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展是否会如此之快。明确以 AGI 为目标的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及与其相关的红木神经科学研究所(Redwood Neuroscience Institute)。此外,机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等机构的成立也是为了影响 AGI 的发展路径。最后,像人脑计划(Human Brain Project)这样的项目的目标是建立一个可运行的人脑模拟。2017年一项针对 AGI 的调查梳理了45个明确或隐含(通过已发表的研究)研究 AGI 的已知“活跃研发项目”,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对公开可用、可自由访问的弱人工智能(如谷歌 AI、苹果的 Siri 等)进行了智力测试。这些 AI 的智商最高达到约47,大致相当于一年级的六岁儿童,而成年人的智商平均约为100。2014年也进行过类似的测试,当时智商分数最高达到27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
<br />
2019年,电子游戏程序员兼航空航天工程师约翰·卡马克(John Carmack)宣布了研究 AGI 的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain ==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popular discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
实现通用智能行为的一种广受讨论的方法是全脑模拟:对一个生物大脑进行详细的扫描和映射,建立一个低层次的大脑模型,并把它的状态复制到计算机系统或其他计算设备中。计算机运行一个对原始大脑高度忠实的模拟模型,使其行为与原始大脑在本质上相同,或在一切实际意义上无法区分。全脑模拟在计算神经科学和神经信息学中,在以医学研究为目的的大脑模拟的语境下被加以讨论;在人工智能研究中,它被作为实现强人工智能的一种途径来讨论。能够提供必要细节性理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,一张质量足够高的大脑地图将在与所需计算能力相近的时间尺度内出现。<br />
<br />
<br />
<br />
===Early estimates ===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, <{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}> Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
<br />
估计在不同层次上模拟人类大脑所需的处理能力(来自雷·库兹韦尔(Ray Kurzweil)以及安德斯·桑德伯格(Anders Sandberg)和尼克·博斯特罗姆(Nick Bostrom)),以及按年份绘制的 TOP500 中最快的超级计算机。注意图中的对数刻度和指数趋势线,该趋势线假设计算能力每1.1年翻一番。库兹韦尔相信,在神经模拟层次上,思维上传将成为可能;而桑德伯格和博斯特罗姆的报告对意识在哪个层次上产生则不太确定。对于低层次的大脑模拟,需要一台极其强大的计算机。人类大脑拥有数量庞大的突触:10<sup>11</sup>(一千亿)个神经元中,每个神经元平均与其他神经元有7000个突触连接。据估计,一个三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触。这个数字随年龄增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>(100万亿到500万亿)个突触不等。基于神经元活动的简单开关模型,对大脑处理能力的一个估计是每秒约10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了对达到人脑水平所需硬件的各种估计,采用了每秒10<sup>16</sup>次计算(cps)这一数字。(作为比较,如果一次“计算”相当于一次“浮点运算”,即用于评定当前超级计算机的指标,那么10<sup>16</sup>次“计算”相当于10 petaFLOPS,这一性能已于2011年实现。)他用这个数字预测,如果当时计算机能力的指数增长持续下去,必要的硬件将在2015年至2025年之间的某个时候出现。<br />
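The order-of-magnitude arithmetic above can be checked in a few lines. A minimal Python sketch, using only the rough figures quoted in this section (these are the section's order-of-magnitude estimates, not measurements):<br />
<br />
```python
import math

# Rough figures quoted above (order-of-magnitude estimates, not measurements).
neurons = 1e11              # ~10^11 neurons in an adult human brain
synapses_per_neuron = 7e3   # ~7,000 synaptic connections per neuron

total_synapses = neurons * synapses_per_neuron
# 7e14 synapses: the same order of magnitude as the quoted adult
# range of 1e14 to 5e14.

# Kurzweil's 1997 hardware figure: 1e16 computations per second (cps).
# If one "computation" equals one floating point operation, that is:
cps = 1e16
petaflops = cps / 1e15      # 10 petaFLOPS, a level reached in 2011

# The figure's trendline assumes computing capacity doubles every 1.1 years,
# so a machine 1,000x too slow would catch up in roughly:
years_to_1000x = 1.1 * math.log2(1000)   # about 11 years

print(total_synapses, petaflops, round(years_to_1000x, 1))
```
<br />
The ~11-year catch-up figure illustrates why, under the doubling assumption, the predicted hardware window (2015 to 2025) follows from machines that were about three orders of magnitude short at the time of writing.<br />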
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
<br />
===Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at the complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
<br />
#The neuron model seems to be oversimplified (see the section on modelling neurons above).<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
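As a quick cross-check of the two estimates just quoted (the numbers are the article's; dividing the 100 trillion synapse figure from the first estimate by the 86 billion neuron figure from the second mixes the two estimates and is purely illustrative):<br />

```python
# Neuron counts from the second estimate quoted above.
cortex = 16.3e9       # neurons in the cerebral cortex
cerebellum = 69e9     # neurons in the cerebellum
total_neurons = 86e9  # whole-brain estimate

# The two named regions account for nearly the whole count.
elsewhere = total_neurons - cortex - cerebellum
print(f"neurons outside cortex and cerebellum ~ {elsewhere:.1e}")  # ~7.0e+08

# Mixing in the 100 trillion synapse figure from the first estimate:
synapses = 100e12
per_neuron = synapses / total_neurons
print(f"implied average synapses per neuron ~ {per_neuron:.0f}")
```

The implied per-neuron average comes out far below the 7,000 figure cited earlier, which illustrates how loosely constrained these estimates are.<br />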
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
Besides intelligence, other aspects of the human mind are relevant to the concept of strong AI and play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort, [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed over time, and the aim of creating machines capable of intelligent action at the human level has stalled.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack sufficient memory or processing power.{{sfn|Clocksin|2003}} In addition, the complexity involved in AI research may itself limit its progress.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems encountered in AI research over the past decades have further impeded the progress of AI. The unfulfilled predictions made by AI researchers and the lack of a complete understanding of human behaviors have diminished the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究人员做出却未能兑现的预测,以及对人类行为缺乏完整的理解,削弱了人类水平人工智能这一核心构想。尽管人工智能研究的进展既带来了进步也带来了失望,但大多数研究人员仍对在21世纪实现人工智能的目标保持乐观。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
对于强人工智能的研究为何旷日持久,人们还提出了其他可能的原因。科学问题的错综复杂,以及需要借助心理学和神经生理学充分了解人脑,限制了许多研究人员在计算机硬件中模拟人脑功能的工作。许多研究人员往往低估与人工智能未来预测相关的种种疑问,但如果不认真对待这些问题,人们就可能忽视疑难问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
Clocksin 说,阻碍人工智能研究进展的一个概念上的限制是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员最初瞄准人工智能这一目标时,主要的兴趣在于人类推理。研究人员希望通过推理建立人类知识的计算模型,并弄清如何设计出能执行特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
抽象这一实践(人们在研究中面对特定语境时往往会重新定义它)使研究人员得以只专注于少数几个概念。抽象在人工智能研究中最富成效的应用来自规划与问题求解。虽然其目标是提高计算速度,但抽象的作用也引发了关于抽象算子如何参与其中的问题。<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed to a computer may be able to account for many of the requirements that allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
人工智能发展缓慢的一个可能原因是,许多人工智能研究人员承认,启发式方法是一个在计算机性能与人类性能之间仍存在重大差距的领域。为计算机编程的特定功能也许能够满足使其匹敌人类智能的许多要求。这些解释未必就是强人工智能迟迟未能实现的根本原因,但它们得到了众多研究人员的广泛认同。<br />
<br />
<br />
<br />
Many AI researchers have debated whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
<br />
许多人工智能研究人员一直在争论是否应当赋予机器以情感。典型的人工智能模型中没有情感,一些研究人员认为,将情感编程进机器可以让它们拥有自己的思想。情感概括了人类的经历,因为它让人们能够记住那些经历。大卫·格勒尼特写道:“除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情感的关注给人工智能研究人员带来了难题,并且随着研究走向未来,它与强人工智能的概念紧密相连。<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
截至2020年3月,通用人工智能(AGI)仍停留在推测阶段,因为迄今尚未有此类系统被展示出来。对于通用人工智能是否会到来以及何时到来,人们看法不一。在一个极端,人工智能先驱赫伯特·西蒙在1965年写道:“机器将在20年内能够完成人类能做的任何工作。”然而,这一预言并没有实现。微软联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”和“对认知在科学上的深刻理解”。机器人专家 Alan Winfield 在《卫报》上撰文称,现代计算与人类水平人工智能之间的鸿沟,如同当前的太空飞行与实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
人工智能专家对 AGI 可行性的看法时起时落,并可能在2010年代有所回潮。2012年和2013年进行的四次民意调查显示,专家们预计自己有50% 把握认为 AGI 将会出现的年份,中位数为2040年至2050年(因调查而异),平均值为2081年。当被问及同样的问题但要求90% 的置信度时,16.5% 的专家回答“永远不会”。关于当前 AGI 进展的进一步讨论,可见下文“确认人类水平 AGI 的测试”和“AGI 的智商测试”。<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
人工智能构成存在性风险、且这一风险需要得到远比目前更多的关注,这一论点已经得到许多公众人物的支持;其中最著名的也许是埃隆·马斯克、比尔·盖茨和斯蒂芬·霍金。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素。该论点的支持者有时会对怀疑论者表示困惑:盖茨表示他不“理解为什么有些人不担心”,霍金则在2014年的社论中批评了普遍的冷漠:“那么,面对可能带来无法估量的利益和风险的未来,专家们肯定会尽一切可能确保最好的结果,对吧?错。如果一个更先进的外星文明给我们发来信息说‘我们几十年后到达’,我们会不会只是回复‘好的,到了给我们打电话,我们会把灯留着’?大概不会,但这或多或少正是人工智能领域正在发生的事情。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
许多关注存在性风险的学者认为,最好的前进方式是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个问题:程序员可以采用哪些类型的保障措施、算法或架构,来最大化其递归自我改进的人工智能在达到超级智能之后继续以友好而非破坏性的方式行事的概率?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
认为人工智能可能构成存在性风险的论点也有许多强烈的反对者。怀疑论者有时指责该论点带有隐秘的宗教色彩:对超级智能出现可能性的非理性信念,取代了对全能上帝的非理性信念;在极端情况下,杰伦·拉尼尔(Jaron Lanier)认为,当前机器具有任何形式智能的整个概念都是“一种幻觉”,是富人编造的“弥天大谎”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
现有的许多批评认为,AGI 在短期内不太可能实现。计算机科学家戈登·贝尔(Gordon Bell)认为,人类在到达技术奇点之前就会先毁灭自己。摩尔定律的最初提出者戈登·摩尔宣称:“我是一个怀疑论者。我不相信(技术奇点)会发生,至少在很长一段时间内不会。我也不知道自己为什么会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,担忧人工智能的存在性风险“就像在我们还没踏上火星时就担心火星人口过剩一样”。<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | last=Berglas | first=Anthony | year=2008 | title=Artificial Intelligence will Kill our Grandchildren | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html}}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010}}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013}}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1}}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | first=Thomas | year=1974 | title =What Is It Like to Be a Bat? | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last = de Vega | editor1-first = Manuel | editor2-last = Glenberg | editor2-first = Arthur | editor3-last = Graesser | editor3-first = Arthur | year = 2008 | title = Symbols and Embodiment: Debates on meaning and cognition | publisher = Oxford University Press | isbn = 978-0-19-921727-4}}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |contribution=Levels of Organization in General Intelligence |title=Artificial General Intelligence |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=14809通用人工智能2020-10-07T23:25:55Z<p>粲兰:</p>
<hr />
<div>This entry was machine-translated by 彩云小译 (Caiyun Xiaoyi) and has not yet been manually edited or reviewed; we apologize for any inconvenience to readers.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}}<br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />

* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />

* [[automated planning and scheduling|plan]];<br />

* [[machine learning|learn]];<br />

* communicate in [[natural language processing|natural language]];<br />

* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />

: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />

;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />

: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />

;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />

: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />

;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />

: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
<br />
<br />
<br />
== History == <br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and avoided any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and avoided any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<br />
<br />
然而,在20世纪70年代早期,研究人员严重低估了该项目的难度,这一点已变得显而易见。资助机构开始对 AGI 持怀疑态度,并对研究人员施加越来越大的压力,要求他们做出有用的“应用人工智能”。进入20世纪80年代,日本的第五代计算机项目重新唤起了人们对 AGI 的兴趣,制定了一个为期10年的时间表,其中包括“进行日常对话”等 AGI 目标。为了回应这一项目以及专家系统的成功,工业界和政府重新向这一领域投入资金。然而,人们对人工智能的信心在20世纪80年代末急剧崩溃,第五代计算机项目的目标也从未实现。这是20年间的第二次,曾预测 AGI 即将实现的人工智能研究人员被证明犯了根本性错误。到了20世纪90年代,人工智能研究人员因做出虚假承诺而声名不佳。他们变得完全不愿做出预测,并且避免提及“人类水平”的人工智能,因为害怕被贴上“狂热梦想家”的标签。<br />
<br />
<br />
<br />
=== Narrow AI research ===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development in this field is considered an emerging trend, and a mature stage is expected in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development in this field is considered an emerging trend, and a mature stage is expected in more than 10 years.<br />
<br />
在1990年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的具体子问题(例如人工神经网络和统计机器学习),取得了远为可观的商业成功和学术声望。这些“应用人工智能”系统如今在整个技术产业中得到广泛应用,这方面的研究也获得了学术界和产业界的大量资助。目前,这一领域的发展被认为是一种新兴趋势,预计还需10年以上才能进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过组合解决各个子问题的程序来开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会在半途之外与传统的自上而下路线相遇,准备好提供真实世界的能力和常识知识,而这些在推理程序中一直令人沮丧地难以捉摸。当象征性的金色道钉被敲下、将这两方面的努力连接起来时,完全智能的机器就会出现。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote><br />
<br />
然而,即使这一基本理念也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在其1990年关于符号接地假说的论文结尾写道:“人们常常表达这样的期望:认知建模的‘自上而下’(符号)方法终将在中间某处与‘自下而上’(感官)方法相遇。如果本文中关于接地的考虑是正确的,那么这种期望就是无可救药地模块化的,从感觉到符号实际上只有一条可行的路径:自下而上。像计算机软件层那样自由浮动的符号层永远无法通过这条路径到达(反之亦然);也不清楚我们为什么要试图到达这样一个层次,因为那看起来无异于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
===Modern artificial general intelligence research===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.<br />
<br />
“通用人工智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作战的影响时使用。这一术语在2002年前后由肖恩·莱格(Shane Legg)和本·戈泽尔(Ben Goertzel)重新引入并推广。其研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)始于1984年的 Cyc 项目,以及艾伦·纽厄尔(Allen Newell)的 Soar 项目,都被视为属于 AGI 的范围。王培(Pei Wang)和本·戈泽尔将2006年的 AGI 研究活动描述为“发表论文并取得初步成果”。2009年,厦门大学人工大脑实验室和 OpenCog 在中国厦门组织了第一届 AGI 暑期学校。第一批大学课程于2010年和2011年由 Todor Arnaudov 在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门 AGI 课程,由 Lex Fridman 组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对 AGI 关注甚少,一些人声称智能过于复杂,无法在短期内完全复制。不过,仍有少数计算机科学家活跃于 AGI 研究,其中许多人参与了一系列 AGI 会议。这类研究极其多样,而且往往具有开创性。戈泽尔在其著作的引言中说,对建成真正灵活的 AGI 所需时间的估计从10年到一个世纪以上不等,但 AGI 研究界的共识似乎是,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》中讨论的时间表(即2015年至2045年之间)是合理的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展会如此之快。明确追求 AGI 的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及其关联的红木神经科学研究所。此外,机器智能研究所和 OpenAI 等机构的成立旨在影响 AGI 的发展路径。最后,像人脑计划这样的项目,其目标是建立一个可运行的人脑功能模拟。2017年一项针对 AGI 的调查,对45个明确或隐含地(通过已发表的研究)研究 AGI 的已知“活跃研发项目”进行了分类,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对谷歌 AI、苹果 Siri 等公开且可自由访问的弱人工智能进行了智力测试。这些 AI 的最高得分约为47,大致相当于一名上一年级的六岁儿童的水平,而成年人的平均得分约为100。2014年也进行过类似的测试,当时的智商得分最高为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI.<br />
<br />
2019年,电子游戏程序员兼航空航天工程师约翰·卡马克(John Carmack)宣布了研究 AGI 的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain ==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
A popularly discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
全脑模拟是实现通用智能行为的一种被广泛讨论的方法。其做法是详细扫描和映射一个生物大脑,并将其状态复制到计算机系统或其他计算设备中,从而建立一个低层次的大脑模型。计算机运行的模拟模型对原件如此忠实,以至于它的行为在本质上与原来的大脑相同,或者就一切实际目的而言,与之无法区分。<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
“基本思想是:取一个特定的大脑,详细扫描其结构,并构建一个对原件足够忠实的软件模型,使其在合适的硬件上运行时,行为与原来的大脑在本质上相同。”在计算神经科学和神经信息学中,全脑模拟是在以医学研究为目的的大脑模拟背景下讨论的;在人工智能研究中,它被作为实现强人工智能的一种途径来讨论。能够提供所需详细理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔在《奇点临近》一书中预测,质量足够高的大脑映射图将与所需的计算能力在相近的时间尺度上出现。<br />
<br />
<br />
<br />
===Early estimates ===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, <{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}> Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises. For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10<sup>11</sup> (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10<sup>16</sup> "computations" would be equivalent to 10 petaFLOPS, achieved in 2011). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
在不同层次上模拟人类大脑所需处理能力的估计(来自雷·库兹韦尔以及安德斯·桑德伯格和尼克·博斯特罗姆),以及按年份绘制的 TOP500 最快超级计算机。注意图中的对数坐标和指数趋势线,后者假设计算能力每1.1年翻一番。库兹韦尔认为在神经模拟层次上即可实现思维上传,而桑德伯格和博斯特罗姆的报告对意识产生于哪个层次则不那么确定。对于低层次的大脑模拟,需要一台极其强大的计算机。人脑拥有数量庞大的突触:10<sup>11</sup>(一千亿)个神经元中,每个平均与其他神经元有7000个突触连接。据估计,三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触;这一数字随年龄增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>个突触(100万亿到500万亿)不等。基于神经元活动的简单开关模型,对大脑处理能力的一个估计约为每秒10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了对与人脑相当的硬件的各种估计,采用了每秒10<sup>16</sup>次计算(cps)的数字。(作为比较,如果一次“计算”相当于一次“浮点运算”,即用于评估当前超级计算机的指标,那么10<sup>16</sup>次“计算”相当于10 petaFLOPS,这已于2011年实现。)他据此预测,如果当时计算机性能的指数增长持续下去,所需硬件将在2015年至2025年之间的某个时候出现。<br />
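正文中的数量级估算可以用一段简短的 Python 草稿复现。仅为示意:其中1997年的基线算力是本文之外的一个假设(当年最快的超级计算机 ASCI Red 约为10<sup>12</sup> FLOPS),其余数字均取自上文。

```python
import math

# Figures quoted in the text above; the 1997 baseline is an assumption.
neurons = 1e11                # ~10^11 neurons in the human brain
synapses_per_neuron = 7_000   # average synaptic connections per neuron
synapses = neurons * synapses_per_neuron
print(f"total synapses: {synapses:.0e}")  # ~7e14, same order as the 1e14-5e14 adult estimates

# Trendline from the figure: computing capacity doubles every 1.1 years.
kurzweil_cps = 1e16                      # Kurzweil's 1997 estimate of required hardware
baseline_flops, baseline_year = 1e12, 1997
doublings = math.log2(kurzweil_cps / baseline_flops)
print(f"hardware parity around {baseline_year + doublings * 1.1:.0f}")
```

这一粗略外推给出2012年前后,与正文一致:10 petaFLOPS 量级的超级计算机确实于2011年出现,也落在库兹韦尔预测的2015年至2025年窗口附近。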
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<br />
<br />
库兹韦尔所假设、并在当前许多人工神经网络实现中使用的人工神经元模型,与生物神经元相比十分简单。大脑模拟很可能必须捕捉生物神经元的详细细胞行为,而目前人们对此只有最粗略的了解。对神经行为的生物、化学和物理细节(尤其是分子尺度上的细节)进行完整建模所带来的开销,将要求比库兹韦尔的估计高出几个数量级的计算能力。此外,这些估计没有考虑胶质细胞:胶质细胞至少与神经元一样多,数量可能多达神经元的10倍,而且现在已知它们在认知过程中发挥作用。<br />
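上段两个修正因素的量级可以粗略地算一下。纯属示意:胶质细胞10:1的上限取自正文,而“几个数量级”在此取3作为例子,是本文之外的假设。

```python
# Element count if glia are included at the 10:1 upper bound from the text.
neurons = 1e11
glia = 10 * neurons
elements = neurons + glia
print(f"elements to simulate: {elements:.1e}")  # 1.1e12, an 11x increase

# Per-element cost of molecular-scale modelling: "several orders of magnitude".
# Taking 3 orders of magnitude as an illustrative value:
base_sups = 1e14
required = base_sups * 11 * 10**3
print(f"required updates/s: {required:.1e}")  # ~1.1e18
```

即便取较保守的假设,所需算力也会从正文的10<sup>14</sup> SUPS 上升到10<sup>18</sup>量级,这正是该段“高出几个数量级”的含义。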
<br />
<br />
<br />
=== Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
<br />
有一些研究项目正在使用更复杂的神经模型研究大脑模拟,这些模型在传统的计算机体系结构上实现。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个“大脑”(含10<sup>11</sup>个神经元)的非实时模拟:在一个由27个处理器组成的集群上,模拟模型时间的1秒钟耗时50天。2006年,蓝脑计划利用世界上最快的超级计算机架构之一,即 IBM 的 Blue Gene 平台,对一个包含约10,000个神经元和10<sup>8</sup>个突触的大鼠新皮层柱进行了实时模拟。一个更长期的目标是建立对人脑生理过程的详细的功能性模拟:“建造一个人脑并非不可能,我们可以在10年内做到。”蓝脑计划主任 Henry Markram 于2009年在牛津举行的 TED 大会上如是说。此外,还有一些声称模拟了猫脑的有争议的说法。神经-硅接口已被提议作为一种可能具备更好扩展性的替代实现策略。<br />
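上段中的数字可以换算成一个直观的减速倍数。下面是一个简单的 Python 示意计算(仅为说明,数值均取自上文):<br />
<br />
```python
# 2005 年“人工智能系统”项目:27 个处理器的集群
# 耗时 50 天才模拟出模型时间的 1 秒。
SECONDS_PER_DAY = 86_400

wall_time_s = 50 * SECONDS_PER_DAY  # 实际耗时(秒)
model_time_s = 1.0                  # 被模拟的模型时间(秒)

slowdown = wall_time_s / model_time_s
print(f"相对实时的减速倍数约为 {slowdown:.2e}")  # 约 4.32e+06
```
<br />
也就是说,在其他条件不变的前提下,要把这一模拟提速到实时,大约还需要把可用算力再提高四百万倍左右。<br />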
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
<br />
Hans Moravec 在他1997年的论文《计算机硬件何时能与人脑相匹配?》中回应了上述论点(“大脑更复杂”、“神经元必须建模得更详细”)。他测量了现有软件模拟神经组织(特别是视网膜)功能的能力。他的结果既不依赖于胶质细胞的数量,也不依赖于神经元在何处执行何种处理。<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in [[OpenWorm|OpenWorm project]] that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
<br />
建模生物神经元的实际复杂性已在 OpenWorm 项目中得到探索,该项目旨在完整模拟一种神经网络仅有302个神经元(全身共约1000个细胞)的蠕虫。在项目启动之前,这种动物的神经网络已被详尽记录。然而,尽管任务起初看似简单,基于通用神经网络的模型并不奏效。目前,研究工作集中于对生物神经元的精确仿真(部分达到分子水平),但其结果尚不能称为完全成功。即便人脑尺度模型中待解决问题的数量与神经元数量不成正比,沿这条路径所需的工作量也是显而易见的。<br />
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
对模拟大脑方法的一个基本批评来自具身认知理论,该理论将人的具身性视为人类智能的一个基本方面。许多研究者认为,具身性是意义落地(grounding)所必需的。如果这种观点正确,那么任何功能完备的大脑模型都不能只包含神经元,还需要更多的东西(例如一个机器人身体)。Goertzel 提出了虚拟具身(就像在《第二人生》中那样),但目前尚不清楚这是否足够。<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
<br />
自2005年起,使用运算能力超过10<sup>9</sup> cps(Kurzweil 的非标准单位“每秒计算次数”,见上文)微处理器的台式计算机已经问世。按照 Kurzweil(和 Moravec)所用的大脑计算能力估算,这样的计算机应当能够支持对蜜蜂大脑的模拟,但尽管存在一些兴趣,这样的模拟至今并不存在。其原因至少有三点:<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
<br />
神经元模型似乎过于简化了(见下一节)。<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
<br />
人们对高级认知过程的理解尚不充分,无法准确地确定通过功能性磁共振成像等技术观察到的大脑神经活动究竟与什么相关。<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
<br />
即使我们对认知的理解取得足够的进展,早期的仿真程序也很可能非常低效,因此需要多得多的硬件。<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
有机体的大脑虽然至关重要,但可能并不是认知模型的合适边界。要模拟蜜蜂的大脑,可能还需要模拟其身体和所处环境。“延展心智”论题将这一哲学观念形式化,而对头足类动物的研究也已展示了去中心化系统的清晰实例。<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
<br />
此外,人脑的规模目前尚无精确界定。一种估计认为人脑约有1000亿个神经元和100万亿个突触。另一种估计则是860亿个神经元,其中163亿位于大脑皮层,690亿位于小脑。胶质细胞的突触数量目前尚未量化,但已知极为庞大。<br />
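上述两种估计可以做一个简单的核算:第一种估计隐含着平均每个神经元约1000个突触;第二种估计则意味着皮层与小脑之外还剩约7亿个神经元。下面的 Python 片段仅为示意性验算:<br />
<br />
```python
# 估计一:约 1000 亿(1e11)个神经元、100 万亿(1e14)个突触。
neurons_est1 = 100e9
synapses_est1 = 100e12
per_neuron = synapses_est1 / neurons_est1
print(f"平均每个神经元约 {per_neuron:.0f} 个突触")  # 1000

# 估计二:共 860 亿个神经元,其中大脑皮层 163 亿、小脑 690 亿。
total, cortex, cerebellum = 86e9, 16.3e9, 69e9
rest = total - cortex - cerebellum
print(f"皮层与小脑之外约有 {rest / 1e8:.0f} 亿个神经元")  # 7
```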
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
1980年,哲学家约翰·塞尔(John Searle)在其“中文房间”论证中创造了“强人工智能”(strong AI)一词。他想要区分关于人工智能的两种不同假设:<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
第一个被称为“强人工智能假设”,第二个被称为“弱人工智能假设”,因为第一个做出了更强的断言:它假定机器内部发生了某种特殊的事情,超出了我们所能测试的一切能力。Searle 将“强人工智能假设”简称为“强人工智能”。这种用法在人工智能学术研究和教科书中也很常见。例如:<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
弱人工智能假说等同于“通用人工智能是可能的”这一假说。按照 Russell 和 Norvig 的说法,“大多数人工智能研究者把弱人工智能假说视为理所当然,而并不关心强人工智能假说。”<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
与 Searle 不同的是,Ray Kurzweil 使用“强人工智能”一词来描述任何表现得仿佛拥有心智的人工智能系统,而不管哲学家能否确定它是否真的拥有心智。<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
在科幻小说中,AGI 与在生物身上观察到的意识、知觉、智慧和自我意识等特征联系在一起。然而,按照 Searle 的说法,通用智能是否足以产生意识仍是一个悬而未决的问题。“强 AI”(如上文 Kurzweil 所定义)不应与 Searle 的“强 AI 假设”相混淆。强 AI 假设主张:一台表现得与人同样智能的计算机,也必然拥有心智和意识。而 AGI 仅指机器所表现出的智能程度,与其是否拥有心智无关。<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
除智能之外,人类心智还有其他一些与强人工智能概念相关的方面,它们在科幻小说和人工智能伦理中扮演着重要角色:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
这些特征具有道德维度,因为具备这种形式的强人工智能的机器可能拥有法律权利,类似于非人类动物的权利。因此,人们已就如何将完整的道德主体纳入现有法律和社会框架开展了初步工作,这些工作侧重于“强”人工智能的法律地位与权利。<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
然而,比尔·乔伊等人认为,具有这些特征的机器可能对人类的生命或尊严构成威胁。这些特征中是否有任何一项是强人工智能所必需的,仍有待证明。意识的作用并不清楚,目前也没有公认的方法来检验其存在。如果一台机器内置了模拟意识神经相关物的装置,它会自动拥有自我意识吗?也有可能其中某些属性(如知觉)会从完全智能的机器中自然涌现,或者一旦机器开始以明显智能的方式行动,人们就会很自然地把这些属性归于机器。例如,智能行为可能足以产生知觉,而非相反。<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
虽然意识在强 AI/AGI 中的作用存在争议,但许多 AGI 研究者认为,探究实现意识之可能性的研究至关重要。在一项早期工作中,Igor Aleksander 认为,创造有意识机器的原理已经存在,但训练这样一台机器理解语言需要四十年时间。<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
<br />
自1956年人工智能研究启动以来,该领域的发展随着时间推移而放缓,使得创造能在人类水平上进行智能行动的机器这一目标陷入停滞。对这种延迟的一个可能解释是,计算机缺乏足够的内存或处理能力。此外,人工智能研究过程本身所涉及的复杂程度也可能限制其进展。<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
虽然大多数人工智能研究者相信强人工智能未来可以实现,但也有像休伯特·德雷福斯和罗杰·彭罗斯这样的人否认实现强人工智能的可能性。约翰·麦卡锡则是相信人类水平人工智能终将实现的众多计算机科学家之一,只是其实现日期无法准确预测。<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
概念上的局限性是人工智能研究进展缓慢的另一个可能原因。人工智能研究者可能需要修改其学科的概念框架,以便为实现强人工智能的探索提供更坚实的基础和贡献。正如 William Clocksin 在2003年所写:“这一框架始于 Weizenbaum 的观察,即智能只有相对于特定的社会和文化语境才得以显现。”<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
此外,人工智能研究者已经能够造出可以完成对人而言很复杂的工作(如数学)的计算机,却反过来难以开发出能完成对人而言很简单的任务(如行走)的计算机(即莫拉维克悖论)。David Gelernter 描述的一个问题是,有些人把思考和推理视为等同。然而,“思想与思想的产生者能否彼此分离”这一问题引起了人工智能研究者的兴趣。<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
过去几十年人工智能研究中遇到的问题进一步阻碍了人工智能的发展。人工智能研究者未能兑现的预测,以及对人类行为完整理解的缺乏,削弱了人类水平人工智能这一核心理念。尽管人工智能研究的进展既带来了进步也带来了失望,大多数研究者仍对在21世纪实现人工智能的目标保持乐观。<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
对于强人工智能研究为何旷日持久,人们还提出了其他可能的原因。科学问题的错综复杂,以及需要借助心理学和神经生理学充分理解人脑,限制了许多研究者在计算机硬件中模拟人脑功能的工作。许多研究者往往低估对人工智能未来预测所应抱有的怀疑,而如果不认真对待这些问题,人们就可能忽视对疑难问题的解决方案。<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
Clocksin 说,阻碍人工智能研究进展的一个概念上的限制是,人们可能在计算机程序和设备实现方面使用了错误的技术。当人工智能研究人员第一次开始瞄准人工智能的目标时,主要的兴趣是人类推理。研究人员希望通过推理建立人类知识的计算模型,并找出如何设计一台具有特定认知任务的计算机。<br />
<br />
<br />
<br />
The practice of abstraction, which researchers tend to redefine for each particular research context, lets them concentrate on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has raised questions about how abstraction operators should be involved.{{sfn|Zucker|2003}}<br />
<br />
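As a concrete (and purely illustrative, not from any cited source) sketch of an abstraction operator in planning: "delete relaxation" ignores the delete effects of actions, producing a simpler problem whose solution depth can serve as a heuristic for search in the original problem. All names below are hypothetical.

```python
# A minimal sketch of delete relaxation, a standard abstraction operator in
# AI planning. Ignoring delete effects makes facts monotonically accumulate,
# so a greedy forward pass suffices; the resulting layer count is often used
# as a heuristic estimate for the unrelaxed problem.

from collections import namedtuple

Action = namedtuple("Action", ["name", "preconds", "adds", "deletes"])

def relaxed_plan_length(state, goal, actions):
    """Number of parallel action layers needed to reach `goal` when delete
    effects are ignored, or None if the goal is unreachable even then."""
    reached = set(state)
    goal = set(goal)
    steps = 0
    while not goal <= reached:
        new_facts = set()
        for a in actions:
            if a.preconds <= reached:          # deletes are ignored entirely
                new_facts |= a.adds - reached
        if not new_facts:                      # fixpoint without the goal
            return None
        reached |= new_facts
        steps += 1
    return steps
```

In a tiny blocks-world-style domain with a pick action and a stack action, the relaxed layer count matches the real plan length; in general it is only a lower bound, which is what makes it useful as an admissible-style search heuristic.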
<br />
<br />
<br />
A possible reason for the slowness of progress in AI is the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions that are programmed into a computer may account for many of the requirements that allow it to match human intelligence. These explanations are not guaranteed to be the fundamental causes of the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
<br />
<br />
Many AI researchers have debated whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI, and some researchers say programming emotions into machines would allow them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers, and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|volume=62|year=2019|journal=Business Horizons|pages=15–25|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael}}</ref><br />
<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
* "[https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Stages of Artificial Intelligence]", Computer Science, 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | last=Berglas | first=Anthony | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html}}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010}}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013}}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1}}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last = de Vega | editor1-first = Manuel | editor2-last = Glenberg | editor2-first = Arthur | editor3-last = Graesser | editor3-first = Arthur | year = 2008 | title = Symbols and Embodiment: Debates on meaning and cognition | publisher = Oxford University Press | isbn=978-0-19-921727-4}}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |contribution=Levels of Organization in General Intelligence |title=Artificial General Intelligence |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>
<hr />
<div>This entry was machine-translated by Caiyun Xiaoyi (彩云小译) and has not yet been manually edited or proofread; we apologize for any inconvenience in reading.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
<br />
Computer-based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
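Turing's blind-evaluation protocol lends itself to a small simulation. The sketch below is illustrative only: all function names are invented for this example, and the 30% "fooling" threshold is an assumption (Turing mentioned a similar figure only informally; there is no standard benchmark).

```python
import random

def turing_trial(machine_reply, human_reply, evaluator, prompts):
    """One blind trial: respondents A and B (one machine, one human,
    randomly assigned) answer the prompts; the evaluator then guesses
    which one is the machine. Returns True if the guess was correct."""
    machine_is_a = random.random() < 0.5
    transcript = []
    for prompt in prompts:
        a = machine_reply(prompt) if machine_is_a else human_reply(prompt)
        b = human_reply(prompt) if machine_is_a else machine_reply(prompt)
        transcript.append((prompt, a, b))
    guess_a_is_machine = evaluator(transcript)
    return guess_a_is_machine == machine_is_a

def passes_turing_test(machine_reply, human_reply, evaluator, prompts,
                       trials=100, fooled_threshold=0.3):
    """The machine 'passes' if it fools the evaluator in at least
    fooled_threshold of the trials (an assumed cutoff, not a standard)."""
    identified = sum(turing_trial(machine_reply, human_reply, evaluator, prompts)
                     for _ in range(trials))
    fooled_rate = 1 - identified / trials
    return fooled_rate >= fooled_threshold
```

Note that, as the article says, the test only scores whether the evaluator can tell the two apart; it prescribes nothing about how the machine produces its replies.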
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
<br />
<br />
<br />
== History == <br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
<br />
<br />
=== Narrow AI research ===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where it could produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Development in this field is currently considered an emerging trend, with a mature stage expected to be more than 10 years away.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
<br />
<br />
<br />
<br />
===Modern artificial general intelligence research===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older; for example, [[Doug Lenat]]'s [[Cyc]] project (begun in 1984) and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by Xiamen University's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI and Apple's Siri. At most, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult scores about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A much-discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap>{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
<br />
<br />
===Early estimates===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS, which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
<br />
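The figures above lend themselves to a quick back-of-envelope check. The sketch below (Python) uses only the numbers quoted in this section, plus one assumption not taken from the article: a 1997 supercomputer baseline of roughly 1.8 teraFLOPS (approximately the fastest machine of that year). Under the figure's "doubling every 1.1 years" trendline, it reproduces both the total synapse count and a projected date close to 2011, the year the article notes 10 petaFLOPS was actually achieved.

```python
import math

# Synapse count: ~10^11 neurons x ~7,000 synapses each (figures from the text).
neurons = 1e11
synapses_per_neuron = 7_000
total_synapses = neurons * synapses_per_neuron  # 7e14, near the top of the quoted 10^14 - 5x10^14 range

# Kurzweil's 1997 hardware target: 10^16 computations per second (~10 petaFLOPS).
target_cps = 1e16

# Assumed 1997 starting point (NOT from the article): ~1.8 teraFLOPS.
baseline_1997_flops = 1.8e12

# Exponential trendline from the figure: capacity doubles every 1.1 years.
doublings = math.log2(target_cps / baseline_1997_flops)
year_reached = 1997 + 1.1 * doublings

print(f"total synapses   ~ {total_synapses:.0e}")
print(f"doublings needed ~ {doublings:.1f}")
print(f"projected year   ~ {year_reached:.0f}")
```

With the assumed baseline, the projection lands around 2011, consistent with the petaFLOPS milestone cited above and just ahead of Kurzweil's 2015–2025 window for the necessary hardware.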
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
<br />
<br />
===Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
<br />
有一些研究项目正在使用更复杂的神经模型研究大脑模拟,这些模型是在传统的计算机体系结构上实现的。人工智能系统项目在2005年实现了一个“大脑”(有10个 sup 11 / sup 神经元)的非实时模拟。在一个由27个处理器组成的集群上,模拟一个模型的一秒钟花费了50天时间。2006年,蓝脑项目利用世界上最快的超级计算机架构之一,IBM 的蓝色基因平台,创建了一个包含大约10,000个神经元和10个 sup 8 / sup 突触的单个大鼠皮层柱的实时模拟。一个更长期的目标是建立一个人脑生理过程的详细的功能模拟: “建立一个人脑并不是不可能的,我们可以在10年内完成,”2009年在牛津举行的 TED 大会上,蓝脑项目主任亨利 · 马克拉姆说。还有一些有争议的说法是模拟猫的大脑。神经硅接口已被提出作为一种替代的实施策略,可能会更好地伸缩。<br />
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aims at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, models based on a generic neural network did not work. Currently, efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
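To make "a generic neural network" concrete: point-neuron models such as the leaky integrate-and-fire unit sketched below reduce a biological neuron to a single voltage variable. This is an illustrative sketch with made-up constants, not OpenWorm's model; the project's experience suggests far more biophysical detail is needed.

```python
def lif_step(v, i_in, dt=1e-4, tau=0.02, v_rest=-0.065,
             v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """One Euler step of a leaky integrate-and-fire neuron.

    v      membrane potential (V)
    i_in   input current (A)
    Returns (new_potential, spiked).
    """
    v += (-(v - v_rest) + r_m * i_in) * dt / tau
    if v >= v_thresh:
        return v_reset, True   # spike, then reset
    return v, False

# Drive the neuron with a constant 2 nA current for 100 ms.
v, spikes = -0.065, 0
for _ in range(1000):
    v, fired = lif_step(v, i_in=2e-9)
    spikes += fired

print(f"{spikes} spikes in 100 ms")
```

A handful of arithmetic operations per neuron per time step is what makes such models cheap to simulate; the point of the OpenWorm result is that this level of abstraction was not enough.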
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
#The neuron model seems to be oversimplified (see next section).<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
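A quick consistency check on the estimates above (a sketch; the "other" bucket simply lumps together the brainstem and all remaining structures):

```python
# Neuron counts quoted above, in billions.
total, cortex, cerebellum = 86.0, 16.3, 69.0
other = total - cortex - cerebellum
print(f"~{other:.1f} billion neurons elsewhere in the brain")  # ~0.7 billion

# Average connectivity under the 100e9-neuron / 100e12-synapse estimate.
synapses_per_neuron = 100e12 / 100e9
print(f"~{synapses_per_neuron:.0f} synapses per neuron on average")  # ~1000
```

Note how unevenly the neurons are distributed: on the 86-billion estimate, the cerebellum alone holds about 80% of them, yet the count outside cortex and cerebellum is under one billion.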
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed over time, stalling the aim of creating machines capable of intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack sufficient memory or processing power.{{sfn|Clocksin|2003}} In addition, the sheer complexity of the problems tackled by AI research may also limit its progress.{{sfn|Clocksin|2003}}<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
Conceptual limitations are another possible reason for the slowness of AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger foundation for the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
The practice of abstraction, which researchers tend to redefine for each particular context, allows them to concentrate on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of computation, the role of abstraction has raised questions about how abstraction operators should be involved.{{sfn|Zucker|2003}}<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes of the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
人工智能发展缓慢的一个可能原因,与许多人工智能研究者的共识有关:在启发式方法这一领域,计算机性能与人类表现之间仍存在显著差距。为计算机编程的特定功能或许能够满足使其匹敌人类智能的许多要求。这些解释未必就是强人工智能迟迟未能实现的根本原因,但它们得到了众多研究者的广泛认同。<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
许多人工智能研究人员一直在争论是否应当赋予机器情感。典型的人工智能模型中没有情感,一些研究人员认为,将情感编程到机器中可以让它们拥有自己的思想。情感概括了人类的经历,因为它使人们能够记住那些经历。大卫·格勒恩特(David Gelernter)写道:“除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情感的关注给人工智能研究人员带来了难题,并且随着研究走向未来,它与强人工智能的概念联系在一起。<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
截至2020年3月,由于尚未有任何此类系统被展示出来,通用人工智能(AGI)仍停留在推测阶段。对于通用人工智能是否会出现以及何时出现,人们看法不一。在一个极端,人工智能先驱赫伯特·西蒙(Herbert A. Simon)于1965年写道:“二十年内,机器将能够完成人类能做的任何工作。”然而,这一预言并未实现。微软联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”以及“对认知在科学上的深入理解”。机器人专家 Alan Winfield 在《卫报》上撰文称,现代计算与人类水平人工智能之间的鸿沟,如同当前航天飞行与实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
人工智能专家对 AGI 可行性的看法时起时落,并可能在2010年代有所回升。2012年和2013年进行的四项调查显示,对于“有50%把握 AGI 将出现”的年份,专家们估计的中位数为2040年至2050年(因调查而异),均值为2081年。当以90%置信度问及同一问题时,16.5%的专家回答“永远不会”。关于当前 AGI 进展的更多讨论,见下文“确认人类水平 AGI 的测试”与“AGI 的智商测试”。<br />
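The gap between the reported median (2040–2050) and mean (2081) is the familiar effect of a long right tail: a few far-future answers drag the average up while barely moving the median. A minimal sketch with invented response data (the `responses` list is hypothetical, not taken from the cited surveys):

```python
import statistics

# Hypothetical responses: each number is the year by which one expert is 50%
# confident AGI will arrive; None encodes a "never" answer.
responses = [2038, 2040, 2045, 2050, 2062, 2100, 2180, None, None]

years = [r for r in responses if r is not None]
median_year = statistics.median(years)           # robust to the long tail
mean_year = statistics.mean(years)               # dragged upward by outliers
never_share = responses.count(None) / len(responses)
print(median_year, round(mean_year), round(never_share, 2))  # 2050 2074 0.22
```

Here the median stays mid-century while two late outliers push the mean a quarter-century later, mirroring the pattern in the polls above.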
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
人工智能构成生存风险、且这一风险需要比当前多得多的关注——这一论点已得到许多公众人物的支持,其中最著名的或许是埃隆·马斯克、比尔·盖茨和斯蒂芬·霍金。支持该论点的最著名的人工智能研究者是斯图尔特·罗素(Stuart J. Russell)。该论点的支持者有时会对怀疑论者表示困惑:盖茨表示他不“理解为什么有些人不担心”,霍金则在2014年的社论中批评了普遍的冷漠:“面对收益与风险都无法估量的可能未来,专家们肯定会尽一切努力确保最好的结果,对吧?错了。如果一个更先进的外星文明给我们发来信息说‘我们几十年后到达’,我们会只回一句‘好的,到了给我们打电话——我们会把灯留着’吗?大概不会——但人工智能领域正在发生的,差不多就是这样。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
许多关注生存风险的学者认为,最好的前进方式是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个问题:程序员可以实现哪些类型的保障措施、算法或架构,以最大限度地提高其递归自我改进的人工智能在达到超级智能之后仍以友好而非破坏性方式行事的概率?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
认为人工智能可能构成生存风险的论点也有许多强烈的反对者。怀疑论者有时指责该论点带有隐秘的宗教色彩,即用对超级智能可能性的非理性信念取代了对全能上帝的非理性信念;在极端情况下,杰伦·拉尼尔(Jaron Lanier)认为,“当前机器在任何意义上具有智能”这一整套观念是富人制造的“一种幻觉”和“惊人骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
现有的许多批评认为 AGI 在短期内不太可能实现。计算机科学家戈登·贝尔(Gordon Bell)认为,人类在到达技术奇点之前就会自我毁灭。摩尔定律的最初提出者戈登·摩尔(Gordon Moore)宣称:“我是个怀疑论者。我不相信(技术奇点)会发生,至少在很长一段时间内不会。我也不知道自己为什么会这么想。”百度副总裁吴恩达(Andrew Ng)表示,对人工智能生存风险的担忧“就像在我们尚未踏上火星时就担心火星人口过剩”。<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=14797通用人工智能2020-10-07T11:56:51Z<p>粲兰:</p>
<hr />
<div>This entry was initially machine-translated by Caiyun Xiaoyi and has not yet been manually edited or reviewed; apologies for any reading inconvenience.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref><br />
<br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />
<br />
* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />
<br />
* [[automated planning and scheduling|plan]];<br />
<br />
* [[machine learning|learn]];<br />
<br />
* communicate in [[natural language processing|natural language]];<br />
<br />
* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
<br />
<br />
== History == <br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
<br />
=== Narrow AI research ===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as artificial neural networks and statistical machine learning. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<br />
<br />
在20世纪90年代和21世纪初,主流人工智能通过专注于能够产生可验证结果和商业应用的特定子问题(例如人工神经网络和统计机器学习),取得了远为巨大的商业成功和学术声望。这些“应用人工智能”系统如今在整个技术产业中得到广泛应用,学术界和产业界也为这方面的研究提供了大量资助。目前,这一领域的发展被视为一种新兴趋势,预计还需要10年以上才能进入成熟阶段。<br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过结合解决各种子问题的程序,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条自下而上的人工智能路线终有一天会与传统的自上而下路线在中途会合,为推理程序提供一直令人沮丧地难以捉摸的真实世界能力和常识知识。当象征性的金道钉被钉下、将这两方面的努力联结起来时,完全智能的机器就会产生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote><br />
<br />
然而,即使是这一基本理念也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德(Stevan Harnad)在其1990年关于符号接地假说的论文结尾写道:“人们经常表达这样的期望:认知建模的‘自上而下’(符号)方法终将与‘自下而上’(感官)方法在中间某处相遇。如果本文中关于接地的考虑是正确的,那么这种期望就是无可救药的模块化思维,从感觉到符号实际上只有一条可行的路径:自底向上。像计算机软件层那样自由漂浮的符号层永远无法通过这条路径到达(反之亦然);也不清楚我们为什么要试图到达这样一个层次,因为到达那里似乎只相当于把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
===Modern artificial general intelligence research===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.<br />
<br />
“人工通用智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产和作业的影响时使用。这个术语在2002年左右被 Shane Legg 和 Ben Goertzel 重新引入并推广。研究目标本身则要古老得多,例如道格·雷纳特(Doug Lenat)的 Cyc 项目(始于1984年)以及艾伦·纽厄尔(Allen Newell)的 Soar 项目都被认为属于 AGI 的范围。王培(Pei Wang)和本·戈泽尔(Ben Goertzel)将2006年的 AGI 研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一个 AGI 暑期学校。第一批大学课程于2010年和2011年由 Todor Arnaudov 在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门 AGI 课程,由 Lex Fridman 组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对 AGI 关注甚少,一些人声称智能过于复杂,无法在短期内完全复制。不过,仍有少数计算机科学家积极从事 AGI 研究,其中许多人参与了一系列 AGI 会议。这些研究极其多样,且往往具有开创性。Goertzel 在其著作的序言中说,对于构建一个真正灵活的 AGI 所需时间的估计,从10年到一个世纪以上不等,但 AGI 研究界的共识似乎是,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》(The Singularity is Near)中讨论的时间表(即2015年至2045年之间)是合理的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展是否会如此之快。明确以 AGI 为目标的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及与其相关的 Redwood Neuroscience Institute。此外,机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等机构的成立就是为了影响 AGI 的发展路径。最后,像人脑计划(Human Brain Project)这样的项目以建立人脑的功能性模拟为目标。2017年的一项 AGI 调查归类了45个明确或隐含地(通过已发表的研究)研究 AGI 的已知“活跃研发项目”,其中最大的三个是 DeepMind、人脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI or Apple's Siri and others. At the maximum, these AI reached a value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对公开且可自由访问的弱人工智能(如谷歌 AI、苹果的 Siri 等)进行了智力测试。这些人工智能最高达到约47的数值,大致相当于一年级六岁儿童的水平,而成年人的平均值约为100。类似的测试曾于2014年进行,当时的智商得分最高为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI.<br />
<br />
2019年,游戏程序师和航空工程师 John Carmack 宣布了研究 AGI 的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain ==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popular discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
A popular discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably. Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
实现通用智能行为的一种广受讨论的方法是全脑仿真:通过详细扫描和绘制生物大脑,并将其状态复制到计算机系统或其他计算设备中,建立一个低层次的大脑模型。计算机运行的仿真模型对原始大脑如此忠实,以至于其行为在本质上与原始大脑相同,或者就一切实际目的而言无法区分。全脑仿真在计算神经科学和神经信息学中、在以医学研究为目的的大脑模拟的语境下被讨论;在人工智能研究中,它被视为实现强人工智能的一条途径。能够提供必要的详细理解的神经成像技术正在迅速进步;未来学家雷·库兹韦尔(Ray Kurzweil)在《奇点临近》一书中预测,足够质量的大脑图谱将与所需的计算能力在相近的时间尺度上问世。<br />
<br />
<br />
<br />
===Early estimates ===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, <{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}> Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises. For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10<sup>11</sup> (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10<sup>16</sup> "computations" would be equivalent to 10 petaFLOPS, achieved in 2011). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
在不同层次上仿真人类大脑所需处理能力的估计(来自 Ray Kurzweil 以及 Anders Sandberg 和 Nick Bostrom),以及按年份绘制的 TOP500 中最快的超级计算机。注意图中采用对数刻度,其指数趋势线假设计算能力每1.1年翻一番。库兹韦尔相信在神经模拟层次上“思维上传”将成为可能,而 Sandberg 和 Bostrom 的报告对意识在哪个层次产生则不太确定。对于低层次的大脑模拟,需要一台极其强大的计算机。人类大脑拥有数量巨大的突触:10<sup>11</sup>(一千亿)个神经元中的每一个平均与其他神经元有7,000个突触连接。据估计,三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触;这一数字随年龄增长而下降,到成年期趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>(100万亿到500万亿)个突触不等。基于神经元活动的简单开关模型,对大脑处理能力的一个估计约为每秒10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了对等同于人脑所需硬件的各种估计,采用了每秒10<sup>16</sup>次计算(cps)这一数字。(作为比较,如果一次“计算”相当于一次“浮点运算”,即用于评价当前超级计算机的指标,那么10<sup>16</sup>次“计算”相当于10 petaFLOPS,该水平已于2011年达到。)他据此预测,如果计算机算力在他写作之时的指数增长持续下去,所需的硬件将在2015年至2025年之间出现。<br />
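The capacity figures quoted above can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch using only the estimates stated in the text (neuron count, synapses per neuron, SUPS, Kurzweil's cps figure, and the figure's 1.1-year doubling assumption); none of the constants are measurements, and the result is only order-of-magnitude reasoning.

```python
import math

# Figures quoted in the text (estimates, not measurements)
neurons = 1e11                  # ~10^11 neurons in the human brain
synapses_per_neuron = 7_000     # average synaptic connections per neuron

synapses = neurons * synapses_per_neuron
# ~7e14 synapses: the same order of magnitude as the quoted
# adult range of 1e14 to 5e14

sups = 1e14                     # simple switch model: synaptic updates/second
kurzweil_cps = 1e16             # Kurzweil's 1997 figure: computations/second

# Trendline assumption from the figure: capacity doubles every 1.1 years,
# so a gap of factor k closes in 1.1 * log2(k) years.
doubling_years = 1.1
factor = kurzweil_cps / sups    # 100x gap between the two estimates
years_to_close = doubling_years * math.log2(factor)

print(f"total synapses ~ {synapses:.0e}")
print(f"gap factor     = {factor:.0f}x")
print(f"years to close gap at 1.1-year doubling = {years_to_close:.1f}")
```

Under these assumptions the 100x gap between the SUPS estimate and Kurzweil's cps figure corresponds to roughly seven years of exponential hardware growth, which is consistent with the 2015–2025 window mentioned above.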
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<br />
<br />
库兹韦尔所假设、并在当前许多人工神经网络实现中使用的人工神经元模型,与生物神经元相比是简单的。大脑模拟可能必须捕捉生物神经元细致的细胞行为,而目前人们对这些行为只有最粗略的了解。对神经行为的生物、化学和物理细节(尤其是分子尺度上)进行完整建模所引入的开销,将需要比库兹韦尔的估计大几个数量级的计算能力。此外,这些估计没有考虑神经胶质细胞;胶质细胞的数量至少与神经元相当,可能多达神经元的10倍,而且现在已知它们在认知过程中发挥作用。<br />
<br />
<br />
<br />
=== Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The Artificial Intelligence System project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model. The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006. A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<br />
<br />
有一些研究项目正在使用在传统计算体系结构上实现的更复杂的神经模型来研究大脑模拟。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个“大脑”(含10<sup>11</sup>个神经元)的非实时模拟:在一个由27个处理器组成的集群上,模拟模型的1秒钟耗时50天。2006年,蓝脑计划利用世界上最快的超级计算机架构之一,即 IBM 的蓝色基因(Blue Gene)平台,创建了对单个大鼠新皮层柱的实时模拟,该皮层柱由约10,000个神经元和10<sup>8</sup>个突触组成。一个更长期的目标是对人脑的生理过程建立详细的功能性模拟:蓝脑计划负责人亨利·马克拉姆(Henry Markram)2009年在牛津举行的 TED 大会上说,“建造一个人脑并非不可能,我们可以在10年内做到。”此外还有一些声称模拟了猫脑的有争议的说法。神经-硅接口已被提议作为一种可能具有更好可扩展性的替代实现策略。<br />
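The simulation figures in the paragraph above imply a large real-time gap, which a short calculation makes explicit. This is a minimal sketch using only the numbers quoted in the text (50 wall-clock days on 27 processors per 1 second of model time); the linear-scaling estimate at the end is a deliberately naive illustrative assumption, since such simulations rarely scale linearly.

```python
# Real-time gap implied by the Artificial Intelligence System
# figures quoted above.
simulated_seconds = 1
wall_clock_seconds = 50 * 24 * 3600          # 50 days expressed in seconds

slowdown = wall_clock_seconds / simulated_seconds
print(f"slowdown vs. real time: {slowdown:,.0f}x")

# Naive (illustrative) linear-scaling estimate of the processor count
# needed for real-time simulation at the same per-processor speed.
processors = 27
processors_for_real_time = processors * slowdown
print(f"processors for real time (linear scaling): {processors_for_real_time:.1e}")
```

The roughly four-million-fold slowdown illustrates why the Blue Brain project's real-time simulation of a single neocortical column, a far smaller network, already required one of the fastest supercomputers of its day.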
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
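Moravec's retina-based approach can be sketched numerically: estimate the compute needed to match a well-understood piece of neural tissue, then scale up by its ratio to the whole brain. The figures below are paraphrased from his 1997 paper and should be treated as illustrative orders of magnitude, not quoted values:<br />
<br />
```python
# Moravec's extrapolation: compute required to replicate retinal processing,
# scaled by the brain-to-retina ratio. Both figures are rough.
retina_mips = 1_000                 # MIPS to match retinal edge/motion detection
brain_to_retina_ratio = 75_000      # whole brain vs. retina, by neural volume
brain_mips = retina_mips * brain_to_retina_ratio
print(brain_mips)                   # 75000000, i.e. roughly 10^8 MIPS
```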
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aims at a complete simulation of a worm whose neural network contains only 302 neurons (among about 1,000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly at the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is clearly substantial.<br />
<br />
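To make "a generic neural network model" concrete: the simplest widely used spiking model is the leaky integrate-and-fire neuron, sketched below. All parameters are illustrative defaults, not values fitted to ''C. elegans''; OpenWorm's experience suggests that even this level of per-neuron detail falls far short of what a faithful simulation needs.<br />
<br />
```python
# Leaky integrate-and-fire (LIF) neuron: membrane voltage leaks toward rest,
# integrates input current, and emits a spike when it crosses threshold.
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-65e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, r_m=1e7):
    """input_current: amperes per time step; returns spike times in seconds."""
    v = v_rest
    spike_times = []
    for step, i in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i) * dt / tau   # forward-Euler step
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset                              # reset after a spike
    return spike_times

# A constant 2 nA input for 100 ms drives the neuron above threshold
# repeatedly; zero input never does.
print(len(simulate_lif([2e-9] * 1000)) > 0)   # True
print(simulate_lif([0.0] * 1000))             # []
```
<br />
Biophysical models of the kind OpenWorm moved toward (e.g. Hodgkin–Huxley-style conductance models) replace the single voltage equation above with coupled differential equations per ion channel, which is where much of the added cost and complexity comes from.<br />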
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), such a computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref>, no such simulation exists.{{Citation needed|date=April 2011}} There are at least three reasons for this:<br />
<br />
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
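The "brain power estimates used by Kurzweil (and Moravec)" mentioned above follow a simple pattern: multiply a synapse count by an assumed firing rate to get cps. A sketch using human-brain figures of the kind this article cites (note that the 200 Hz rate is an added assumption, not a figure from the text, and every number is an order-of-magnitude guess):<br />
<br />
```python
# Kurzweil-style cps ("computations per second") estimate for a human brain,
# compared with a 2005-era desktop. Treat each figure as an order of magnitude.
synapses = 100e12          # one common estimate of human synapse count
firing_rate_hz = 200       # assumed peak update rate per synapse (assumption)
human_cps = synapses * firing_rate_hz
desktop_cps = 1e9          # the 10^9 cps desktop discussed above
print(f"human brain ~ {human_cps:.0e} cps, "
      f"{human_cps / desktop_cps:.0e}x a 2005 desktop")
```
<br />
On these assumptions a human brain would need roughly 2 × 10<sup>16</sup> cps, about ten million times the desktop figure, which is why the discussion above concerns bee-scale rather than human-scale simulation.<br />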
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
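As a quick arithmetic check on the second estimate above (a sketch using only the figures quoted), the regional counts nearly exhaust the total:<br />
<br />
```python
# The 86-billion-neuron estimate, broken down by region.
total_neurons = 86e9
cortex = 16.3e9
cerebellum = 69e9
elsewhere = total_neurons - cortex - cerebellum
print(f"{elsewhere / 1e9:.1f} billion neurons elsewhere")  # 0.7 billion
```
<br />
That is, under one percent of neurons lie outside the cerebral cortex and cerebellum on this estimate.<br />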
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine, beyond all the abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed over time, and the aim of creating machines capable of intelligent action at the human level has stalled.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack sufficient memory or processing power.{{sfn|Clocksin|2003}} In addition, the complexity of AI research itself may also limit its progress.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems encountered in AI research over the past decades have further impeded its progress. Failed predictions by AI researchers and the lack of a complete understanding of human behaviors have undermined the central idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working within a particular research context, allows researchers to concentrate on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of computation, the role of abstraction has raised questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which there remains a significant gap between computer and human performance.{{sfn|McCarthy|2007}} The specific functions programmed into a computer may be able to account for many of the requirements that would allow it to match human intelligence. These explanations are not guaranteed to be the fundamental causes of the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.<br />
<br />
<br />
<br />
<br />
Many AI researchers have debated whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI, and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers, and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal |last1=Kaplan |first1=Andreas |last2=Haenlein |first2=Michael |year=2019 |title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence |journal=Business Horizons |volume=62 |pages=15–25 |doi=10.1016/j.bushor.2018.08.004}}</ref><br />
<br />
许多人工智能研究人员一直在争论机器是否应该带有情感。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感总结了人类的经历,因为它允许人们记住那些经历。大卫 · 格勒尼特写道: “除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情绪的关注给人工智能研究人员带来了一些问题,随着未来人工智能研究的进展,它与强人工智能的概念相联系。<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
截至2020年3月,通用人工智能(AGI)仍属推测,因为迄今尚未有此类系统被展示出来。对于通用人工智能是否会到来以及何时到来,人们的看法各不相同。在一个极端,人工智能先驱赫伯特·西蒙在1965年写道:“机器将能在20年内完成人类能做的任何工作。”然而,这个预言并没有实现。微软联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”和“对认知的科学性深入理解”。机器人专家 Alan Winfield 在《卫报》上发表文章称,现代计算与人类水平人工智能之间的鸿沟,与当前的太空飞行和实用的超光速飞行之间的鸿沟一样宽。<br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
人工智能专家对通用人工智能可行性的看法时起时落,并可能在2010年代出现了复苏。2012年和2013年进行的四次民意调查显示,专家们对“何时有50%的把握通用人工智能会实现”的猜测中位数为2040年至2050年(取决于具体调查),平均值为2081年。在这些专家中,当被问到同样的问题但把握提高到90%时,16.5%的人回答“永远不会”。关于当前通用人工智能进展的进一步讨论,可参见下文“确认人类水平通用人工智能的测试”和“通用人工智能的智商测试”。<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
许多公众人物支持这样一个论点:人工智能构成存在性风险,而且这一风险需要得到远比目前更多的关注;其中最著名的也许是埃隆·马斯克、比尔·盖茨和斯蒂芬·霍金。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素。该论点的支持者有时会对怀疑论者表示困惑:盖茨表示,他不“理解为什么有些人不担心”;霍金则在2014年的社论中批评了普遍的冷漠:“那么,面对可能带来无法估量的利益与风险的未来,专家们肯定在尽一切可能确保最好的结果,对吗?错。如果一个更先进的外星文明给我们发来信息说‘我们几十年后到达’,我们会只是回复‘好的,到了给我们打电话,我们会把灯留着’吗?大概不会,但这或多或少正是人工智能领域正在发生的事情。”<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
许多关注存在性风险的学者认为,最好的前进方式是进行(可能是大规模的)研究,以解决困难的“控制问题”,从而回答这样一个问题:程序员可以实现哪些类型的保障措施、算法或架构,以最大限度地提高其递归自我改进的人工智能在达到超级智能后继续以友好而非破坏性方式运行的可能性?<br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
认为人工智能可能构成存在性风险的论点也有许多强烈的反对者。怀疑论者有时指责该论点带有准宗教色彩,即用对超级智能可能性的非理性信仰取代了对全能上帝的非理性信仰;在极端情况下,杰伦·拉尼尔(Jaron Lanier)认为,“当前的机器在任何意义上具有智能”这一整套概念是“一种幻觉”,是富人的“惊天骗局”。<br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
现有的许多批评认为,通用人工智能在短期内不太可能实现。计算机科学家戈登·贝尔(Gordon Bell)认为,人类在到达技术奇点之前就会自我毁灭。摩尔定律的最初提出者戈登·摩尔宣称:“我是一个怀疑论者。我不相信(技术奇点)有可能发生,至少在很长一段时间内不会。我也不知道我为什么会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,人工智能的存在性风险“就像在我们还没有踏上火星时就担心火星人口过剩一样”。<br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=%E9%80%9A%E7%94%A8%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD&diff=14796通用人工智能2020-10-07T11:51:03Z<p>粲兰:</p>
<hr />
<div>This entry was initially machine-translated by Caiyun Xiaoyi (彩云小译) and has not yet been manually edited or reviewed; we apologize for any reading inconvenience.<br />
<br />
{{Short description|Hypothetical human-level or stronger AI}}<br />
<br />
{{Use British English|date = March 2019}}<br />
<br />
{{Use dmy dates|date=December 2019}}<br />
<br />
{{Artificial intelligence}}<br />
<br />
'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name = "Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]]|title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence<br />
<br />
|first=Mike|last=Treder|work=Responsible Nanotechnology|date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html|archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> <br />
<br />
or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}} <br />
<br />
Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref><br />
<br />
<br />
<br />
Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.<br />
<br />
<br />
<br />
As of 2017, over forty organizations are researching AGI.<ref name=baum/><br />
<br />
<br />
<br />
==Requirements==<br />
<br />
{{main|Cognitive science}}<br />
<br />
<br />
<br />
Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref><br />

This list of intelligent traits is based on the topics covered by major AI textbooks, including:<br />

{{Harvnb|Russell|Norvig|2003}},<br />

{{Harvnb|Luger|Stubblefield|2004}},<br />

{{Harvnb|Poole|Mackworth|Goebel|1998}} and<br />

{{Harvnb|Nilsson|1998}}.<br />

</ref><br />
* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];<br />

* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];<br />

* [[automated planning and scheduling|plan]];<br />

* [[machine learning|learn]];<br />

* communicate in [[natural language processing|natural language]];<br />

* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.<br />
<br />
<br />
<br />
Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref><br />
<br />
Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.<br />
<br />
<br />
<br />
===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===<br />
<br />
The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref><br />
<br />
;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])<br />
<br />
: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.<br />
<br />
;The Coffee Test ([[Steve Wozniak|''Wozniak'']])<br />
<br />
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<br />
<br />
;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])<br />
<br />
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.<br />
<br />
;The Employment Test ([[Nils John Nilsson|''Nilsson'']])<br />
<br />
: A machine works an economically important job, performing at least as well as humans in the same job.<br />
<br />
<br />
<br />
=== Problems requiring AGI to solve ===<br />
<br />
{{Main|AI-complete}}<br />
<br />
<br />
<br />
The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref><br />
<br />
<br />
<br />
AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref><br />
<br />
<br />
<br />
AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref><br />
<br />
<br />
<br />
== History == <br />
<br />
=== Classical AI ===<br />
<br />
{{Main|History of artificial intelligence}}<br />
<br />
Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}<br />
<br />
<br />
<br />
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref><br />
<br />
<br />
<br />
=== Narrow AI research ===<br />
<br />
{{Main|Artificial intelligence}}<br />
<br />
<br />
<br />
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref><br />
<br />
<br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote><br />
<br />
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote><br />
<br />
大多数主流人工智能研究人员希望,通过结合解决各种子问题的程序,可以开发出强人工智能。汉斯·莫拉维克(Hans Moravec)在1988年写道:“我相信,这条通向人工智能的自下而上的路线,终有一天会与传统的自上而下的路线在中途相遇,从而提供在推理程序中一直难以捉摸、令人沮丧的真实世界能力和常识知识。当那枚隐喻性的金道钉被钉下、将两方面的努力连为一体时,完全智能的机器就会诞生。”<br />
<br />
<br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote><br />
<br />
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote><br />
<br />
然而,即使这一基本哲学也存在争议;例如,普林斯顿大学的斯蒂文·哈纳德在其1990年关于符号落地假说的论文结尾写道:“人们经常表达这样的期望:认知建模的‘自上而下’(符号)方法终会在中间某处与‘自下而上’(感官)方法相遇。如果本文中关于落地的考虑是正确的,那么这种期望就是无可救药地模块化的,从感觉到符号实际上只有一条可行的路径:自下而上。像计算机软件层那样自由漂浮的符号层永远无法通过这条路径到达(反之亦然);也不清楚我们为什么要试图到达这样一个层次,因为看起来到达那里只不过是把我们的符号从其内在意义中连根拔起(从而仅仅把我们自己降格为可编程计算机的功能等价物)。”<br />
<br />
<br />
<br />
===Modern artificial general intelligence research===<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older; for example, [[Doug Lenat]]'s [[Cyc]] project (which began in 1984) and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by Xiamen University's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}<br />
<br />
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. The research objective is much older; for example, Doug Lenat's Cyc project (which began in 1984) and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by Xiamen University's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.<br />
<br />
“人工通用智能”一词早在1997年就由马克·古布鲁德(Mark Gubrud)在讨论全自动化军事生产与作业的影响时使用。这个术语在2002年左右被沙恩·莱格(Shane Legg)和本·戈泽尔(Ben Goertzel)重新引入并推广。这一研究目标则要古老得多,例如道格·雷纳特(Doug Lenat)的 Cyc 项目(始于1984年)和艾伦·纽厄尔(Allen Newell)的 Soar 项目都被认为属于 AGI 的范围。王培(Pei Wang)和本·戈泽尔将2006年的 AGI 研究活动描述为“发表论文和取得初步成果”。2009年,厦门大学人工脑实验室和 OpenCog 在中国厦门组织了第一届 AGI 暑期学校。第一门大学课程由 Todor Arnaudov 于2010年和2011年在保加利亚普罗夫迪夫大学开设。2018年,麻省理工学院开设了一门 AGI 课程,由 Lex Fridman 组织,并邀请了多位客座讲师。然而,迄今为止,大多数人工智能研究人员对 AGI 关注甚少,一些人声称智能过于复杂,无法在短期内完全复制。不过,仍有少数计算机科学家活跃于 AGI 研究,其中许多人为一系列 AGI 会议做出贡献。这项研究极其多样化,而且往往具有开创性。Goertzel 在其著作的引言中说,对制造出真正灵活的 AGI 所需时间的估计从10年到一个多世纪不等,但 AGI 研究界的共识似乎是,雷·库兹韦尔(Ray Kurzweil)在《奇点临近》中讨论的时间表(即2015年至2045年之间)是合理的。<br />
<br />
<br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref><br />
<br />
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.<br />
<br />
然而,大多数主流人工智能研究人员怀疑进展是否会如此之快。明确寻求 AGI 的组织包括瑞士人工智能实验室 IDSIA、Nnaisense、Vicarious、Maluuba、OpenCog 基金会、Adaptive AI、LIDA,以及 Numenta 及与之相关的 Redwood Neuroscience Institute。此外,还成立了机器智能研究所(Machine Intelligence Research Institute)和 OpenAI 等机构来影响 AGI 的发展道路。最后,像人类大脑计划这样的项目的目标是建立对人脑的功能性模拟。2017年的一项 AGI 调查归类了45个明确或隐含地(通过已发表的研究)研究 AGI 的已知“活跃研发项目”,其中最大的三个是 DeepMind、人类大脑计划和 OpenAI。<br />
<br />
<br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AIs such as Google AI and Apple's Siri. At most, these AIs reached an IQ value of about 47, roughly that of a six-year-old child in first grade; an adult averages about 100. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref><br />
<br />
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AIs such as Google AI and Apple's Siri. At most, these AIs reached an IQ value of about 47, roughly that of a six-year-old child in first grade; an adult averages about 100. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<br />
<br />
2017年,研究人员 Feng Liu、Yong Shi 和 Ying Liu 对谷歌人工智能、苹果 Siri 等公开且可自由使用的弱人工智能进行了智力测试。这些 AI 最高达到约47分,大致相当于一年级六岁儿童的水平;成年人的平均智商约为100。2014年也进行过类似的测试,当时智商得分的最高值为27。<br />
<br />
<br />
<br />
In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref><br />
<br />
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI.<br />
<br />
2019年,电子游戏程序员兼航空航天工程师约翰·卡马克(John Carmack)宣布了研究 AGI 的计划。<br />
<br />
<br />
<br />
==Processing power needed to simulate a brain==<br />
<br />
<br />
<br />
===Whole brain emulation===<br />
<br />
{{main|Mind uploading}}<br />
<br />
A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
A popularly discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap><br />
<br />
一种常被讨论的实现通用智能行为的方法是全脑仿真。通过对生物大脑进行详细的扫描和测绘,并将其状态复制到计算机系统或其他计算设备中,来建立一个低层次的大脑模型。计算机运行的仿真模型对原始大脑如此忠实,以至于它的行为在本质上与原始大脑相同,或者就一切实际目的而言,两者难以区分。<br />
<br />
{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
"The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.<br />
<br />
“基本思想是,取一个特定的大脑,详细扫描其结构,并构建一个对原始大脑如此忠实的软件模型,以至于在适当的硬件上运行时,它的行为方式与原始大脑基本相同。”在计算神经科学和神经信息学中,人们在以医学研究为目的的大脑模拟背景下讨论全脑仿真。在人工智能研究中,它被作为实现强人工智能的一种途径来讨论。能够提供必要的细节理解的神经成像技术正在迅速进步,未来学家雷·库兹韦尔在《奇点临近》一书中预测,足够质量的大脑图谱将与所需的计算能力在相近的时间尺度上出现。<br />
<br />
<br />
<br />
===Early estimates===<br />
<br />
[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS, which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" was equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]]). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises. For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10<sup>11</sup> (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10<sup>16</sup> "computations" would be equivalent to 10 petaFLOPS, achieved in 2011). He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.<br />
<br />
估计在不同层次上模拟人类大脑需要多少处理能力(来自 Ray Kurzweil 以及 Anders Sandberg 和 Nick Bostrom),并按年份标出 TOP500 中最快的超级计算机。请注意其中的对数坐标和指数趋势线,后者假设计算能力每1.1年翻一番。库兹韦尔相信在神经模拟层次上传思维将成为可能,而 Sandberg 和 Bostrom 的报告对意识在哪个层次产生则不太确定。对于低层次的大脑模拟,需要一台极其强大的计算机。人类大脑拥有大量的突触。10<sup>11</sup>(1000亿)个神经元中,每个神经元平均与其他神经元有7000个突触连接。据估计,三岁儿童的大脑约有10<sup>15</sup>(1千万亿)个突触。这个数字随年龄增长而下降,到成年后趋于稳定。对成年人的估计各不相同,从10<sup>14</sup>到5×10<sup>14</sup>个突触(100万亿到500万亿)不等。基于神经元活动的简单开关模型,对大脑处理能力的一个估计约为每秒10<sup>14</sup>(100万亿)次突触更新(SUPS)。1997年,库兹韦尔考察了对与人脑相当的硬件的各种估计,采用了每秒10<sup>16</sup>次计算(cps)的数字。(作为比较,如果一次“计算”相当于一次“浮点运算”(一种用于评定当前超级计算机的度量),那么10<sup>16</sup>次“计算”相当于10 petaFLOPS,已于2011年达到。)他用这个数字预测,如果写作当时计算机能力的指数增长持续下去,那么必要的硬件将在2015年到2025年之间的某个时候出现。<br />
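The arithmetic behind these estimates is easy to check. A minimal Python sketch, using only the figures quoted above (synapse counts, the 1.1-year doubling assumption, and the ~10<sup>9</sup> cps desktop figure the article cites for 2005-era machines):

```python
import math

# Figures quoted in the text; the script only checks their consistency.
NEURONS = 1e11              # ~one hundred billion neurons
SYNAPSES_PER_NEURON = 7e3   # average synaptic connections per neuron

total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"total synapses: {total_synapses:.1e}")          # 7.0e+14

# Kurzweil's 1997 figure of 1e16 computations per second, expressed in
# petaFLOPS under the 1 computation = 1 FLOP assumption:
KURZWEIL_CPS = 1e16
print(f"petaFLOPS equivalent: {KURZWEIL_CPS / 1e15:.0f}")  # 10

def years_to_reach(target_cps, current_cps, doubling_years=1.1):
    """Years until capacity reaches target, doubling every 1.1 years."""
    return doubling_years * math.log2(target_cps / current_cps)

# From a 2005-era desktop (~1e9 cps) to Kurzweil's 1e16 cps:
print(f"years needed: {years_to_reach(1e16, 1e9):.1f}")  # 25.6
```

Note that 7×10<sup>14</sup> total synapses sits slightly above the quoted adult range of 10<sup>14</sup> to 5×10<sup>14</sup>, which reflects the spread among the underlying estimates rather than an arithmetic error.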
<br />
<br />
<br />
===Modelling the neurons in more detail===<br />
<br />
The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for [[glial cells]], which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref><br />
<br />
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition the estimates do not account for glial cells, which are at least as numerous as neurons, and which may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<br />
<br />
与生物神经元相比,库兹韦尔所假设的、在当前许多人工神经网络实现中使用的人工神经元模型是简单的。大脑模拟很可能必须捕捉生物神经元细致的细胞行为,而人们目前对其只有最粗略的了解。对神经行为的生物、化学和物理细节(尤其是在分子尺度上)进行完整建模所带来的开销,将需要比库兹韦尔的估计大几个数量级的计算能力。此外,这些估计没有考虑胶质细胞;胶质细胞至少和神经元一样多,数量可能多达神经元的10倍,而且现在已知它们在认知过程中发挥作用。<br />
<br />
<br />
<br />
===Current research===<br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref><br />
<br />
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The Artificial Intelligence System project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model. The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006. A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<br />
<br />
有一些研究项目正在使用在传统计算架构上实现的更复杂的神经模型来研究大脑模拟。人工智能系统(Artificial Intelligence System)项目在2005年实现了对一个“大脑”(含10<sup>11</sup>个神经元)的非实时模拟。在一个由27个处理器组成的集群上,模拟模型的1秒钟花费了50天。2006年,蓝脑计划使用世界上最快的超级计算机架构之一,即 IBM 的蓝色基因平台,创建了对单个大鼠新皮层柱(约含10,000个神经元和10<sup>8</sup>个突触)的实时模拟。一个更长期的目标是建立对人脑生理过程的详细的功能性模拟:蓝脑计划主任亨利·马克拉姆2009年在牛津举行的 TED 大会上说,“建立一个人脑并非不可能,我们可以在10年内做到。”也有一些声称模拟了猫脑的有争议的说法。神经-硅接口已被提议作为一种可能更具可扩展性的替代实现策略。<br />
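The figures quoted above imply a large gap to real-time, human-scale simulation. A quick Python check, using only the numbers given in the text (50 days of wall-clock time per model second, the ~10<sup>4</sup>-neuron real-time Blue Brain column, and the ~10<sup>11</sup>-neuron human brain):

```python
# Slowdown of the 2005 Artificial Intelligence System run:
# 50 days of wall-clock time to simulate 1 second of model time.
SIM_DAYS = 50
MODEL_SECONDS = 1.0

slowdown = (SIM_DAYS * 24 * 3600) / MODEL_SECONDS
print(f"slowdown factor: {slowdown:.2e}")    # 4.32e+06 (over 4 million x real time)

# Gap between the Blue Brain real-time column (~1e4 neurons)
# and a whole human brain (~1e11 neurons):
scale_up = 1e11 / 1e4
print(f"scale-up in neuron count: {scale_up:.0e}")  # 1e+07
```

These are neuron-count ratios only; as the surrounding sections note, synapse counts, glial cells, and molecular-level detail would widen the gap further.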
<br />
<br />
<br />
[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
Hans Moravec addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?". He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.<br />
<br />
汉斯·莫拉维克在他1997年的论文《计算机硬件何时能与人脑相匹配?》中回应了上述论点(“大脑更复杂”“神经元必须建模得更详细”)。他测量了现有软件模拟神经组织(特别是视网膜)功能的能力。他的结果既不依赖于胶质细胞的数量,也不依赖于何种处理神经元在何处执行。<br />
<br />
<br />
<br />
The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aimed at the complete simulation of a worm whose neural network has only 302 neurons (among about 1,000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
The actual complexity of modeling biological neurons has been explored in the OpenWorm project, which aimed at the complete simulation of a worm whose neural network has only 302 neurons (among about 1,000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, the efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot be called a total success yet. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.<br />
<br />
在 OpenWorm 项目中,已经探讨了建模生物神经元的实际复杂性,该项目旨在完全模拟一个蠕虫,其神经网络中只有302个神经元(在总共约1000个细胞中)。在项目开始之前,动物的神经网络已经被很好地记录了下来。然而,尽管一开始任务看起来很简单,基于一般神经网络的模型并不起作用。目前,研究的重点是精确模拟生物神经元(部分在分子水平上) ,但结果还不能被称为完全成功。即使在人脑尺度模型中需要解决的问题的数量与神经元的数量不成比例,沿着这条路径所做的工作量也是显而易见的。<br />
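To see how coarse a "generic" point-neuron model is compared with the biological emulation OpenWorm turned to, consider the leaky integrate-and-fire neuron, one of the simplest abstractions used in such simulations. This is a minimal sketch with illustrative parameter values, not OpenWorm's actual model:<br />

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential v
# decays toward a resting value, is driven by the input current, and on
# crossing a threshold emits a spike and resets. Parameters are illustrative.
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step of  dV/dt = (-(V - V_rest) + I) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset            # instantaneous reset
    return spikes

# Constant drive above threshold produces regular, periodic spiking.
spike_times = simulate_lif([1.5] * 100)
print(spike_times)
```

A real biological neuron adds ion-channel kinetics, dendritic geometry, neuromodulation and gene expression on top of this caricature, which is the gap the paragraph describes.<br />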
<br />
<br />
<br />
===Criticisms of simulation-based approaches===<br />
<br />
A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.<br />
<br />
<br />
<br />
<br />
Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:<br />
<br />
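The back-of-envelope arithmetic behind the bee-brain claim can be reproduced directly. The figures below — Moravec's roughly 10<sup>14</sup> cps estimate for the whole human brain and on the order of 10<sup>6</sup> neurons in a honey-bee brain — come from outside this passage and are only order-of-magnitude assumptions:<br />

```python
# Order-of-magnitude check: scale a whole-human-brain cps estimate
# down to a bee brain by neuron count. All figures are rough assumptions.
human_brain_cps = 1e14      # Moravec-style whole-brain estimate (assumption)
human_neurons = 86e9        # ~86 billion neurons
bee_neurons = 1e6           # honey bee: on the order of a million neurons

bee_brain_cps = human_brain_cps * (bee_neurons / human_neurons)
desktop_cps = 1e9           # the 2005-era desktop figure from the text

print(f"bee brain ~ {bee_brain_cps:.1e} cps")   # ~1.2e9 cps
print(f"desktop/bee ratio ~ {desktop_cps / bee_brain_cps:.2f}")
```

Under these assumptions a 10<sup>9</sup> cps desktop lands in the same ballpark as a bee brain, which is why the absence of such a simulation calls for explanations beyond raw hardware — the three reasons listed below.<br />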
<br />
#The neuron model seems to be oversimplified (see next section).<br />
<br />
<br />
#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.<br />
<br />
<br />
#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.<br />
<br />
<br />
#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref><br />
<br />
<br />
<br />
<br />
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.<br />
<br />
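The figures above can be reconciled with a line of arithmetic. This only restates the numbers in the text; note that the synapses-per-neuron ratio pairs counts from two different estimates, so it is indicative only:<br />

```python
# Figures from the text, in billions (neurons) and trillions (synapses).
total_neurons_bn = 86.0
cortex_bn = 16.3
cerebellum_bn = 69.0
synapses_tn = 100.0          # from the 100-billion-neuron estimate

# Neurons outside the cortex and cerebellum (brainstem etc.).
rest_bn = total_neurons_bn - cortex_bn - cerebellum_bn
print(f"outside cortex+cerebellum: {rest_bn:.1f} billion neurons")

# Average synapses per neuron, mixing the two estimates (indicative only).
per_neuron = synapses_tn * 1e12 / 100e9
print(f"~{per_neuron:.0f} synapses per neuron")
```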
<br />
<br />
<br />
==Strong AI and consciousness==<br />
<br />
{{See also|Philosophy of artificial intelligence|Turing test}}<br />
<br />
<br />
<br />
In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref><br />
<br />
<br />
* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)<br />
<br />
* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.<br />
<br />
The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:<br />
<br />
<br />
* {{Harvnb|Russell|Norvig|2003}},<br />
<br />
* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),<br />
<br />
* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")<br />
<br />
* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}<br />
<br />
* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref><br />
<br />
<br />
<br />
The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}<br />
<br />
<br />
<br />
<br />
In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.<br />
<br />
<br />
In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.<br />
<br />
<br />
<br />
<br />
===Consciousness===<br />
<br />
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:<br />
<br />
<br />
* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref><br />
<br />
* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.<br />
<br />
* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.<br />
<br />
* [[sapience]]: The capacity for wisdom.<br />
<br />
<br />
<br />
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref><br />
<br />
<br />
<br />
<br />
However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.<br />
<br />
<br />
<br />
<br />
===Artificial consciousness research===<br />
<br />
{{Main|Artificial consciousness}}<br />
<br />
<br />
<br />
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].<br />
<br />
<br />
<br />
<br />
==Possible explanations for the slow progress of AI research==<br />
<br />
{{See also|History of artificial intelligence#The problems}}<br />
<br />
<br />
<br />
Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}<br />
<br />
<br />
<br />
<br />
Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}<br />
<br />
<br />
<br />
<br />
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}<br />
<br />
<br />
<br />
<br />
The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}<br />
<br />
<br />
<br />
<br />
Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}<br />
<br />
<br />
<br />
<br />
The practice of abstraction, which people tend to redefine when working within a particular research context, allows researchers to concentrate on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}<br />
<br />
<br />
<br />
<br />
A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area where a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not necessarily the fundamental causes of the delay in achieving strong AI, but they are widely shared among researchers.<br />
<br />
<br />
<br />
<br />
There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref><br />
<br />
<br />
<br />
<br />
==Controversies and dangers==<br />
<br />
<br />
<br />
===Feasibility===<br />
<br />
{{expand section|date=February 2016}}<br />
<br />
As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref><br />
<br />
<br />
<br />
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].<br />
<br />
<br />
<br />
===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===<br />
<br />
{{Main|Existential risk from artificial general intelligence}}<br />
<br />
<br />
<br />
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}<br />
<br />
<br />
<br />
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref><br />
<br />
<br />
<br />
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref><br />
<br />
<br />
<br />
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref><br />
<br />
<br />
<br />
==See also==<br />
<br />
{{div col|colwidth=30em}}<br />
<br />
* [[Automated machine learning]]<br />
<br />
* [[Machine ethics]]<br />
<br />
* [[Multi-task learning]]<br />
<br />
* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]<br />
<br />
* [[Nick Bostrom]]<br />
<br />
* [[Eliezer Yudkowsky]]<br />
<br />
* [[Future of Humanity Institute]]<br />
<br />
* [[Outline of artificial intelligence]]<br />
<br />
* [[Artificial brain]]<br />
<br />
* [[Transfer learning]]<br />
<br />
* [[Outline of transhumanism]]<br />
<br />
* [[General game playing]]<br />
<br />
* [[Synthetic intelligence]]<br />
<br />
* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}<br />
<br />
<br />
<br />
==Notes==<br />
<br />
{{reflist|colwidth=30em}}<br />
<br />
<br />
<br />
==References==<br />
<br />
{{refbegin|2}}<br />
<br />
* "Stages of Artificial Intelligence", [https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Computer Science], 2 April 2020.<br />
<br />
* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}<br />
<br />
* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}<br />
<br />
* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}<br />
<br />
* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}<br />
<br />
* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}<br />
<br />
* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}<br />
<br />
* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}<br />
<br />
* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}<br />
<br />
* {{Crevier 1993}}<br />
<br />
* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.<br />
<br />
* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}<br />
<br />
* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}<br />
<br />
* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}<br />
<br />
* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}<br />
<br />
* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}<br />
<br />
* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}<br />
<br />
* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}<br />
<br />
* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}<br />
<br />
* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}<br />
<br />
* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}<br />
<br />
* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}<br />
<br />
* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}<br />
<br />
* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}<br />
<br />
* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}<br />
<br />
* {{McCorduck 2004}}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}<br />
<br />
* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}<br />
<br />
* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}<br />
<br />
* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}<br />
<br />
* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}<br />
<br />
* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}<br />
<br />
* {{Russell Norvig 2003}}<br />
<br />
* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}<br />
<br />
* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}<br />
<br />
* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}<br />
<br />
* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}<br />
<br />
* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}<br />
<br />
* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009--><br />
<br />
* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}<br />
<br />
* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}} <br />
<br />
* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}<br />
<br />
* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.<br />
<br />
{{refend}}<br />
<br />
<br />
<br />
==External links==<br />
<br />
* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]<br />
<br />
* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence<br />
<br />
* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]<br />
<br />
* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]<br />
<br />
* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review<br />
<br />
<br />
<br />
{{Existential risk from artificial intelligence}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Artificial general intelligence}}<br />
<br />
[[Category:Hypothetical technology]]<br />
<br />
[[Category:Artificial intelligence]]<br />
<br />
[[Category:Computational neuroscience]]<br />
<br />
<br />
<br />
[[fr:Intelligence artificielle#Intelligence artificielle forte]]<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=LFR%E7%AE%97%E6%B3%95&diff=14769LFR算法2020-10-06T06:00:38Z<p>粲兰:</p>
<hr />
<div>This entry was translated by 袁一博 and has not yet been manually curated or proofread; apologies for any inconvenience.<br />
<br />
{{short description|Algorithm}}<br />
<br />
{{Network science}}<br />
<br />
<br />
<br />
'''Lancichinetti–Fortunato–Radicchi''' '''benchmark''' is an algorithm that generates [[Benchmark (computing)|benchmark]] networks (artificial networks that resemble real-world networks). They have ''a priori'' known [[Community structure|communities]] and are used to compare different community detection methods.<ref>Hua-Wei Shen (2013). "Community Structure of Complex Networks". Springer Science & Business Media. 11–12.</ref> The advantage of the benchmark over other methods is that it accounts for the [[Homogeneity (statistics)|heterogeneity]] in the distributions of [[Vertex (graph theory)|node]] [[Degree (graph theory)|degrees]] and of community sizes.<ref name="original">A. Lancichinetti, S. Fortunato, and F. Radicchi.(2008) Benchmark graphs for testing community detection algorithms. Physical Review E, 78. {{ArXiv|0805.4770}}</ref><br />
<br />
Lancichinetti–Fortunato–Radicchi benchmark is an algorithm that generates benchmark networks (artificial networks that resemble real-world networks). They have a priori known communities and are used to compare different community detection methods. The advantage of the benchmark over other methods is that it accounts for the heterogeneity in the distributions of node degrees and of community sizes.<br />
<br />
'''<font color="#ff8000">兰奇基内蒂-福图纳托-拉迪奇基准程序(Lancichinetti–Fortunato–Radicchi benchmark)</font>'''是一种生成基准网络(类似于真实世界网络的人工网络)的算法。它们具有预先已知的社区,用于比较不同的社区检测方法。与其他方法相比,该基准的优点在于它考虑了'''<font color="#ff8000">节点度(node degree)</font>'''分布和社区规模分布的'''<font color="#ff8000">异质性(heterogeneity)</font>'''。<br />
<br />
<br />
<br />
==The algorithm 算法==<br />
<br />
The node degrees and the community sizes are distributed according to a [[power law]], with different exponents. The benchmark assumes that both the degree and the community size have [[Power law distribution|power law distributions]] with different exponents, <math>\gamma</math> and <math>\beta</math>, respectively. <math>N</math> is the number of nodes and the average degree is <math>\langle k \rangle</math>. There is a mixing parameter <math>\mu</math>, which is the average fraction of neighboring nodes of a node that do not belong to any community that the benchmark node belongs to. This parameter controls the fraction of edges that are between communities.<ref name="original"/> Thus, it reflects the amount of noise in the network. At the extremes, when <math>\mu = 0</math> all links are within community links, if <math> \mu = 1 </math> all links are between nodes belonging to different communities.<ref>Twan van Laarhoven and Elena Marchiori (2013). "Network community detection with edge classifiers trained on LFR graphs". https://www.cs.ru.nl/~elenam/paper-learning-community.pdf</ref><br />
<br />
The node degrees and the community sizes are distributed according to a power law, with different exponents. The benchmark assumes that both the degree and the community size have power law distributions with different exponents, <math>\gamma</math> and <math>\beta</math>, respectively. <math>N</math> is the number of nodes and the average degree is <math>\langle k \rangle</math>. There is a mixing parameter <math>\mu</math>, which is the average fraction of neighboring nodes of a node that do not belong to any community that the benchmark node belongs to. This parameter controls the fraction of edges that are between communities.<br />
<br />
节点度和社区规模按幂律分布,但指数不同。基准测试假设度和社区规模都具有不同指数的'''<font color="#ff8000">幂律分布(power law distribution)</font>''',分别为<math>\gamma</math>和<math>\beta</math>。<math>N</math>是节点的数量,平均度为<math>\langle k \rangle</math>。混合参数<math>\mu</math>是一个节点的相邻节点中不属于该节点所在任何社区的平均比例。这个参数控制着社区之间的边所占的比例。<br />
<br />
<br />
<br />
One can generate the benchmark network using the following steps.<br />
<br />
One can generate the benchmark network using the following steps.<br />
<br />
可以通过以下步骤生成基准网络。<br />
<br />
<br />
<br />
<big>'''Step 1:'''</big> Generate a network with nodes following a power law distribution with exponent <math>\gamma</math> and choose extremes of the distribution <math> k_{\min} </math> and <math> k_{\max} </math> to get the desired average degree <math>\langle k\rangle</math>.<br />
<br />
<big>Step 1:</big> Generate a network with nodes following a power law distribution with exponent <math>\gamma</math> and choose extremes of the distribution <math> k_{\min} </math> and <math> k_{\max} </math> to get the desired average degree <math>\langle k\rangle</math>.<br />
<br />
<big>步骤1:</big> 生成一个网络,其节点遵循指数为<math>\gamma</math>的幂律分布,并选择分布的极值<math> k_{\min} </math>和<math> k_{\max} </math>来获得期望平均度<math>\langle k\rangle</math>。<br />
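As an illustration of Step 1 (this is not the authors' reference implementation; the function name and parameter values below are our own), degrees can be drawn from a truncated power law by inverse-transform sampling:

```python
import random

def sample_power_law(n, gamma, k_min, k_max, seed=0):
    """Draw n integer degrees from a truncated power law p(k) ~ k^-gamma
    on [k_min, k_max], by inverting the continuous CDF and rounding down."""
    rng = random.Random(seed)
    a = k_min ** (1.0 - gamma)
    b = k_max ** (1.0 - gamma)
    degrees = []
    for _ in range(n):
        u = rng.random()
        k = (a + u * (b - a)) ** (1.0 / (1.0 - gamma))  # inverse CDF
        degrees.append(int(k))
    return degrees

degrees = sample_power_law(n=1000, gamma=2.5, k_min=5, k_max=50)
avg_k = sum(degrees) / len(degrees)
```

In practice <math>k_{\min}</math> is tuned until the sampled mean matches the desired <math>\langle k\rangle</math>; ready-made generators such as networkx's `LFR_benchmark_graph` handle this internally.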
<br />
<br />
<br />
<big>'''Step 2:'''</big> <math>(1 - \mu)</math> fraction of links of every node is with nodes of the same community, while fraction <math>\mu</math> is with the other nodes.<br />
<br />
<big>Step 2:</big> <math>(1 - \mu)</math> fraction of links of every node is with nodes of the same community, while fraction <math>\mu</math> is with the other nodes.<br />
<br />
<big>步骤2:</big> 每个节点有<math>(1 - \mu)</math>比例的链接连接到同一社区内的节点,其余<math>\mu</math>比例的链接连接到其他社区的节点。<br />
<br />
<br />
<br />
<big>'''Step 3:'''</big> Generate community sizes from a power law distribution with exponent <math>\beta</math>. The sum of all sizes must be equal to <math>N</math>. The minimal and maximal community sizes <math> s_{\min} </math> and <math> s_{\max} </math> must satisfy the definition of community so that every non-isolated node is in at least one community:<br />
<br />
<big>Step 3:</big> Generate community sizes from a power law distribution with exponent <math>\beta</math>. The sum of all sizes must be equal to <math>N</math>. The minimal and maximal community sizes <math> s_{\min} </math> and <math> s_{\max} </math> must satisfy the definition of community so that every non-isolated node is in at least one community:<br />
<br />
<big>步骤3:</big> 根据指数为<math>\beta</math>的幂律分布生成社区规模。所有规模大小的和必须等于<math>N</math>。最小和最大的社区规模<math> s_{\min} </math>和<math> s_{\max} </math>必须满足社区的定义,这样每个非孤立的节点至少存在于一个社区中:<br />
<br />
<br />
<br />
: <math> s_{\min} > k_{\min} </math> <br />
<br />
<math> s_{\min} > k_{\min} </math> <br />
<br />
<math> s_{\min} > k_{\min} </math><br />
<br />
: <math> s_{\max} > k_{\max} </math><br />
<br />
<math> s_{\max} > k_{\max} </math><br />
<br />
<math> s_{\max} > k_{\max} </math><br />
<br />
<br />
<br />
<big>'''Step 4:'''</big> Initially, no nodes are assigned to communities. Then, each node is randomly assigned to a community. As long as the number of neighboring nodes within the community does not exceed the community size a new node is added to the community, otherwise stays out. In the following iterations the “homeless” node is randomly assigned to some community. If that community is complete, i.e. the size is exhausted, a randomly selected node of that community must be unlinked. Stop the iteration when all the communities are complete and all the nodes belong to at least one community.<br />
<br />
<big>Step 4:</big> Initially, no nodes are assigned to communities. Then, each node is randomly assigned to a community. As long as the number of neighboring nodes within the community does not exceed the community size a new node is added to the community, otherwise stays out. In the following iterations the “homeless” node is randomly assigned to some community. If that community is complete, i.e. the size is exhausted, a randomly selected node of that community must be unlinked. Stop the iteration when all the communities are complete and all the nodes belong to at least one community.<br />
<br />
<big>步骤4:</big> 最初,没有任何节点被分配到社区。然后,每个节点被随机分配到一个社区。只要该节点在社区内的相邻节点数量不超过社区规模,就把它添加到该社区,否则不添加。在接下来的迭代中,"无归属"的节点被随机分配给某个社区。如果该社区已满,即规模已经用尽,就必须随机选择该社区中的一个节点并断开其链接。当所有社区都已满且所有节点都至少属于一个社区时停止迭代。<br />
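Step 4 can be sketched as follows (a simplified illustration with hypothetical names; it assumes <math> s_{\min} > k_{\min} </math> holds, so every node fits into at least one community):

```python
import random

def assign_nodes(internal_degrees, community_sizes, seed=0):
    """Randomly assign nodes to communities. A node fits a community only if
    its internal degree is smaller than the community size; a full community
    evicts a random member back to the homeless pool (Step 4, simplified)."""
    rng = random.Random(seed)
    members = [[] for _ in community_sizes]
    homeless = list(range(len(internal_degrees)))
    while homeless:
        node = homeless.pop()
        c = rng.randrange(len(community_sizes))
        if internal_degrees[node] >= community_sizes[c]:
            homeless.append(node)        # node cannot fit here; retry later
            continue
        if len(members[c]) == community_sizes[c]:
            evicted = members[c].pop(rng.randrange(len(members[c])))
            homeless.append(evicted)     # community full: evict a random member
        members[c].append(node)
    return members

members = assign_nodes([1, 1, 2, 2], [4, 4])
```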
<br />
<br />
<br />
<big>'''Step 5:'''</big> Implement rewiring of nodes keeping the same node degrees but only affecting the fraction of internal and external links such that the number of links outside the community for each node is approximately equal to the mixing parameter <math>\mu</math>.<ref name="original"/><br />
<br />
<big>Step 5:</big> Implement rewiring of nodes keeping the same node degrees but only affecting the fraction of internal and external links such that the number of links outside the community for each node is approximately equal to the mixing parameter <math>\mu</math>.<br />
<br />
<big>步骤5:</big> 对节点重新布线,保持节点度不变,只调整内部和外部链接的比例,使得每个节点在社区外的链接占比约等于混合参数<math>\mu</math>。<br />
<br />
<br />
<br />
==Testing 测试==
<br />
Consider a [[Partition of a set|partition]] into communities that do not overlap. The communities of randomly chosen nodes in each iteration follow a <math>p(C)</math> distribution that represents the probability that a randomly picked node is from the community <math>C</math>. Consider a partition of the same network that was predicted by some community finding algorithm and has <math>p(C_2)</math> distribution. The benchmark partition has <math>p(C_1)</math> distribution.<br />
<br />
Consider a partition into communities that do not overlap. The communities of randomly chosen nodes in each iteration follow a <math>p(C)</math> distribution that represents the probability that a randomly picked node is from the community <math>C</math>. Consider a partition of the same network that was predicted by some community finding algorithm and has <math>p(C_2)</math> distribution. The benchmark partition has <math>p(C_1)</math> distribution.<br />
<br />
考虑社区的一个不重叠分割。每次迭代中随机选择的节点的社区遵循一个<math>p(C)</math>分布,这个分布表示随机选择的节点来自社区<math>C</math>的概率。考虑同一个网络的一个分割,这个分割由某种社区发现算法预测得出,并且具有<math>p(C_2)</math>分布。基准分割具有<math>p(C_1)</math>分布。<br />
<br />
The joint distribution is <math>p(C_1, C_2)</math>. The similarity of these two partitions is captured by the normalized [[mutual information]].<br />
<br />
The joint distribution is <math>p(C_1, C_2)</math>. The similarity of these two partitions is captured by the normalized mutual information.<br />
<br />
联合分布为<math>p(C_1, C_2)</math>。这两个分割的相似性可以通过'''<font color="#ff8000">归一化互信息(normalized mutual information)</font>'''来刻画。<br />
<br />
<br />
<br />
: <math> I_n = \frac{\sum_{C_1,C_2} p(C_1,C_2) \log_2 \frac{p(C_1,C_2)}{p(C_1)p(C_2)} }{\frac 1 2 H(\{p(C_1)\}) + \frac 1 2 H(\{p(C_2)\})} </math><br />
<br />
<math> I_n = \frac{\sum_{C_1,C_2} p(C_1,C_2) \log_2 \frac{p(C_1,C_2)}{p(C_1)p(C_2)} }{\frac 1 2 H(\{p(C_1)\}) + \frac 1 2 H(\{p(C_2)\})} </math><br />
<br />
<math> I_n = \frac{\sum_{C_1,C_2} p(C_1,C_2) \log_2 \frac{p(C_1,C_2)}{p(C_1)p(C_2)} }{\frac 1 2 H(\{p(C_1)\}) + \frac 1 2 H(\{p(C_2)\})} </math><br />
<br />
<br />
<br />
If <math> I_n=1 </math> the benchmark and the detected partitions are identical, and if <math> I_n=0 </math> then they are independent of each other.<ref>Barabasi, A.-L. (2014). "Network Science". Chapter 9: Communities.</ref><br />
<br />
If <math> I_n=1 </math> the benchmark and the detected partitions are identical, and if <math> I_n=0 </math> then they are independent of each other.<br />
<br />
如果<math> I_n=1 </math>,则基准分割和检测到的分割是相同的;如果<math> I_n=0 </math>,则它们彼此独立。<br />
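The normalized mutual information above can be computed directly from two label lists (a stdlib sketch with our own function name; it does not guard against the degenerate case where both partitions have a single community, which makes the denominator zero):

```python
from collections import Counter
from math import log2

def normalized_mutual_information(labels_a, labels_b):
    """I_n between two partitions, normalized by the arithmetic
    mean of the two partition entropies (in bits)."""
    n = len(labels_a)
    pa = Counter(labels_a)                 # community sizes of partition A
    pb = Counter(labels_b)                 # community sizes of partition B
    joint = Counter(zip(labels_a, labels_b))
    mi = sum((c / n) * log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
             for (x, y), c in joint.items())
    ha = -sum((c / n) * log2(c / n) for c in pa.values())
    hb = -sum((c / n) * log2(c / n) for c in pb.values())
    return mi / (0.5 * (ha + hb))

# Identical partitions (up to relabeling) give I_n = 1.
print(normalized_mutual_information([0, 0, 1, 1], [1, 1, 0, 0]))  # prints 1.0
```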
<br />
<br />
<br />
==References 参考文献==<br />
<br />
{{Reflist}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Lancichinetti-Fortunato-Radicchi benchmark}}<br />
<br />
[[Category:Algorithms]]<br />
<br />
Category:Algorithms<br />
<br />
类别: 算法<br />
<br />
[[Category:Random graphs]]<br />
<br />
Category:Random graphs<br />
<br />
类别: 随机图<br />
<br />
[[Category:Benchmarks (computing)]]<br />
<br />
Category:Benchmarks (computing)<br />
<br />
类别: 基准(计算)<br />
<br />
[[Category:Statistical methods]]<br />
<br />
Category:Statistical methods<br />
<br />
类别: 统计方法<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Lancichinetti–Fortunato–Radicchi benchmark]]. Its edit history can be viewed at [[LFR算法/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>粲兰https://wiki.swarma.org/index.php?title=LFR%E7%AE%97%E6%B3%95&diff=14767LFR算法2020-10-05T15:17:13Z<p>粲兰:</p>
<hr />
<div>此词条由袁一博翻译,未经人工整理和审校,带来阅读不便,请见谅。<br />
<br />
{{short description|Algorithm}}<br />
<br />
{{Network science}}<br />
<br />
<br />
<br />
'''Lancichinetti–Fortunato–Radicchi''' '''benchmark''' is an algorithm that generates [[Benchmark (computing)|benchmark]] networks (artificial networks that resemble real-world networks). They have ''a priori'' known [[Community structure|communities]] and are used to compare different community detection methods.<ref>Hua-Wei Shen (2013). "Community Structure of Complex Networks". Springer Science & Business Media. 11–12.</ref> The advantage of the benchmark over other methods is that it accounts for the [[Homogeneity (statistics)|heterogeneity]] in the distributions of [[Vertex (graph theory)|node]] [[Degree (graph theory)|degrees]] and of community sizes.<ref name="original">A. Lancichinetti, S. Fortunato, and F. Radicchi.(2008) Benchmark graphs for testing community detection algorithms. Physical Review E, 78. {{ArXiv|0805.4770}}</ref><br />
<br />
Lancichinetti–Fortunato–Radicchi benchmark is an algorithm that generates benchmark networks (artificial networks that resemble real-world networks). They have a priori known communities and are used to compare different community detection methods. The advantage of the benchmark over other methods is that it accounts for the heterogeneity in the distributions of node degrees and of community sizes.<br />
<br />
'''<font color="#ff8000">兰奇基内蒂-福图纳托-拉迪奇基准程序(Lancichinetti–Fortunato–Radicchi benchmark)</font>'''是一种生成基准网络(类似于真实世界网络的人工网络)的算法。它们具有预先已知的社区,用于比较不同的社区检测方法。与其他方法相比,该基准的优点在于它考虑了'''<font color="#ff8000">节点度(node degree)</font>'''分布和社区规模分布的'''<font color="#ff8000">异质性(heterogeneity)</font>'''。<br />
<br />
<br />
<br />
==The algorithm 算法==<br />
<br />
The node degrees and the community sizes are distributed according to a [[power law]], with different exponents. The benchmark assumes that both the degree and the community size have [[Power law distribution|power law distributions]] with different exponents, <math>\gamma</math> and <math>\beta</math>, respectively. <math>N</math> is the number of nodes and the average degree is <math>\langle k \rangle</math>. There is a mixing parameter <math>\mu</math>, which is the average fraction of neighboring nodes of a node that do not belong to any community that the benchmark node belongs to. This parameter controls the fraction of edges that are between communities.<ref name="original"/> Thus, it reflects the amount of noise in the network. At the extremes, when <math>\mu = 0</math> all links are within community links, if <math> \mu = 1 </math> all links are between nodes belonging to different communities.<ref>Twan van Laarhoven and Elena Marchiori (2013). "Network community detection with edge classifiers trained on LFR graphs". https://www.cs.ru.nl/~elenam/paper-learning-community.pdf</ref><br />
<br />
The node degrees and the community sizes are distributed according to a power law, with different exponents. The benchmark assumes that both the degree and the community size have power law distributions with different exponents, <math>\gamma</math> and <math>\beta</math>, respectively. <math>N</math> is the number of nodes and the average degree is <math>\langle k \rangle</math>. There is a mixing parameter <math>\mu</math>, which is the average fraction of neighboring nodes of a node that do not belong to any community that the benchmark node belongs to. This parameter controls the fraction of edges that are between communities.<br />
<br />
节点度和社区规模按幂律分布,但指数不同。基准测试假设度和社区规模都具有不同指数的'''<font color="#ff8000">幂律分布(power law distribution)</font>''',分别为<math>\gamma</math>和<math>\beta</math>。<math>N</math>是节点的数量,平均度为<math>\langle k \rangle</math>。混合参数<math>\mu</math>是一个节点的相邻节点中不属于该节点所在任何社区的平均比例。这个参数控制着社区之间的边所占的比例。<br />
<br />
<br />
<br />
One can generate the benchmark network using the following steps.<br />
<br />
One can generate the benchmark network using the following steps.<br />
<br />
可以通过以下步骤生成基准网络。<br />
<br />
<br />
<br />
<big>'''Step 1:'''</big> Generate a network with nodes following a power law distribution with exponent <math>\gamma</math> and choose extremes of the distribution <math> k_{\min} </math> and <math> k_{\max} </math> to get the desired average degree <math>\langle k\rangle</math>.<br />
<br />
<big>Step 1:</big> Generate a network with nodes following a power law distribution with exponent <math>\gamma</math> and choose extremes of the distribution <math> k_{\min} </math> and <math> k_{\max} </math> to get the desired average degree <math>\langle k\rangle</math>.<br />
<br />
<big>步骤1:</big> 生成一个网络,其节点遵循指数为<math>\gamma</math>的幂律分布,并选择分布的极值<math> k_{\min} </math>和<math> k_{\max} </math>来获得期望平均度<math>\langle k\rangle</math>。<br />
<br />
<br />
<br />
<big>'''Step 2:'''</big> <math>(1 - \mu)</math> fraction of links of every node is with nodes of the same community, while fraction <math>\mu</math> is with the other nodes.<br />
<br />
<big>Step 2:</big> <math>(1 - \mu)</math> fraction of links of every node is with nodes of the same community, while fraction <math>\mu</math> is with the other nodes.<br />
<br />
<big>步骤2:</big> 每个节点有<math>(1 - \mu)</math>比例的链接连接到同一社区内的节点,其余<math>\mu</math>比例的链接连接到其他社区的节点。<br />
<br />
<br />
<br />
<big>'''Step 3:'''</big> Generate community sizes from a power law distribution with exponent <math>\beta</math>. The sum of all sizes must be equal to <math>N</math>. The minimal and maximal community sizes <math> s_{\min} </math> and <math> s_{\max} </math> must satisfy the definition of community so that every non-isolated node is in at least one community:<br />
<br />
<big>Step 3:</big> Generate community sizes from a power law distribution with exponent <math>\beta</math>. The sum of all sizes must be equal to <math>N</math>. The minimal and maximal community sizes <math> s_{\min} </math> and <math> s_{\max} </math> must satisfy the definition of community so that every non-isolated node is in at least one community:<br />
<br />
<big>步骤3:</big> 根据指数为<math>\beta</math>的幂律分布生成社区规模。所有规模大小的和必须等于<math>N</math>。最小和最大的社区规模<math> s_{\min} </math>和<math> s_{\max} </math>必须满足社区的定义,这样每个非孤立的节点至少存在于一个社区中:<br />
<br />
<br />
<br />
: <math> s_{\min} > k_{\min} </math> <br />
<br />
<math> s_{\min} > k_{\min} </math> <br />
<br />
<math> s_{\min} > k_{\min} </math><br />
<br />
: <math> s_{\max} > k_{\max} </math><br />
<br />
<math> s_{\max} > k_{\max} </math><br />
<br />
<math> s_{\max} > k_{\max} </math><br />
<br />
<br />
<br />
<big>'''Step 4:'''</big> Initially, no nodes are assigned to communities. Then, each node is randomly assigned to a community. As long as the number of neighboring nodes within the community does not exceed the community size a new node is added to the community, otherwise stays out. In the following iterations the “homeless” node is randomly assigned to some community. If that community is complete, i.e. the size is exhausted, a randomly selected node of that community must be unlinked. Stop the iteration when all the communities are complete and all the nodes belong to at least one community.<br />
<br />
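The loop in Step 4 can be sketched as follows. The function name, its arguments, and the eviction detail (kicking out a random member, possibly the newcomer, when a community overflows) are our simplifications for illustration, not the reference implementation:

```python
import random

def assign_nodes(internal_degrees, sizes, seed=0):
    """Step 4 sketch: repeatedly assign "homeless" nodes to random
    communities.  A node fits only if its internal degree is below the
    community size; an overfull community evicts a random member."""
    rng = random.Random(seed)
    members = [[] for _ in sizes]
    homeless = list(range(len(internal_degrees)))
    while homeless:
        node = homeless.pop()
        c = rng.randrange(len(sizes))
        if internal_degrees[node] >= sizes[c]:
            homeless.append(node)        # node cannot fit here; retry elsewhere
            continue
        members[c].append(node)
        if len(members[c]) > sizes[c]:   # community complete: unlink a random member
            homeless.append(members[c].pop(rng.randrange(len(members[c]))))
    return members
```

The loop terminates with probability 1 provided every node's internal degree is below the size of at least one community with spare capacity.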
<br />
<br />
<br />
<big>'''Step 5:'''</big> Implement rewiring of nodes, keeping the same node degrees but only affecting the fraction of internal and external links, such that the fraction of links outside the community for each node is approximately equal to the mixing parameter <math>\mu</math>.<ref name="original"/><br />
<br />
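Step 5 targets the mixing parameter <math>\mu</math>, the fraction of each node's links that point outside its community. A small helper (our own, for illustration) measures the realized value on a finished graph:

```python
def mixing_parameter(adj, community_of):
    """Average over nodes of the fraction of links leaving the node's
    community.  adj maps node -> list of neighbours; community_of maps
    node -> community label."""
    fractions = []
    for node, neighbours in adj.items():
        external = sum(1 for v in neighbours
                       if community_of[v] != community_of[node])
        fractions.append(external / len(neighbours))
    return sum(fractions) / len(fractions)

# A triangle inside one community has mu = 0
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(mixing_parameter(triangle, {0: "a", 1: "a", 2: "a"}))  # → 0.0
```

Rewiring in Step 5 repeatedly swaps link endpoints, rejecting swaps that change any node's degree, until this measured value is close enough to the target <math>\mu</math>.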
<br />
<br />
<br />
==Testing==<br />
<br />
Consider a [[Partition of a set|partition]] into communities that do not overlap. A randomly picked node belongs to community <math>C</math> with probability <math>p(C)</math>. Consider a partition of the same network that was predicted by some community finding algorithm, with distribution <math>p(C_2)</math>. The benchmark partition has distribution <math>p(C_1)</math>.<br />
<br />
<br />
The joint distribution is <math>p(C_1, C_2)</math>. The similarity of these two partitions is captured by the normalized [[mutual information]].<br />
<br />
<br />
<br />
<br />
: <math> I_n = \frac{\sum_{C_1,C_2} p(C_1,C_2) \log_2 \frac{p(C_1,C_2)}{p(C_1)p(C_2)} }{\frac 1 2 H(\{p(C_1)\}) + \frac 1 2 H(\{p(C_2)\})} </math><br />
<br />
<br />
<br />
<br />
If <math> I_n=1 </math>, the benchmark and the detected partitions are identical, and if <math> I_n=0 </math>, they are independent of each other.<ref>Barabasi, A.-L. (2014). "Network Science". Chapter 9: Communities.</ref><br />
<br />
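The normalized mutual information above can be computed directly from two label vectors. The function below is an illustrative sketch (the name and input convention are ours), not a library API:

```python
import math
from collections import Counter

def normalized_mutual_information(part1, part2):
    """NMI between two partitions, each given as an equal-length list of
    community labels (part1[i] = community of node i in partition 1)."""
    n = len(part1)
    p1 = Counter(part1)                  # community sizes in partition 1
    p2 = Counter(part2)                  # community sizes in partition 2
    joint = Counter(zip(part1, part2))   # joint label counts
    # Mutual information I(C1; C2)
    mi = sum((c / n) * math.log2((c / n) / ((p1[a] / n) * (p2[b] / n)))
             for (a, b), c in joint.items())
    # Entropies H(C1) and H(C2)
    h1 = -sum((c / n) * math.log2(c / n) for c in p1.values())
    h2 = -sum((c / n) * math.log2(c / n) for c in p2.values())
    return mi / (0.5 * h1 + 0.5 * h2)

# Identical partitions (up to relabeling) give I_n = 1
print(normalized_mutual_information([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
```

Because only label co-occurrence matters, relabeling communities does not change the score.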
<br />
<br />
<br />
==References==<br />
<br />
{{Reflist}}<br />
<br />
<br />
<br />
{{DEFAULTSORT:Lancichinetti-Fortunato-Radicchi benchmark}}<br />
<br />
[[Category:Algorithms]]<br />
<br />
<br />
[[Category:Random graphs]]<br />
<br />
<br />
[[Category:Benchmarks (computing)]]<br />
<br />
<br />
[[Category:Statistical methods]]<br />
<br />
<br />
<noinclude><br />
<br />
<small>This page was moved from [[wikipedia:en:Lancichinetti–Fortunato–Radicchi benchmark]]. Its edit history can be viewed at [[LFR算法/edithistory]]</small></noinclude><br />
<br />
[[Category:待整理页面]]</div>