Changes: removed 2,966 bytes, 23:22, 7 August 2021 (Saturday)
{{refend}}

== Further reading ==
 
{{refbegin|30em}}
* DH Autor, 'Why Are There Still So Many Jobs? The History and Future of Workplace Automation' (2015) 29(3) Journal of Economic Perspectives 3.
* [[Margaret Boden|Boden, Margaret]], ''Mind As Machine'', [[Oxford University Press]], 2006.
* [[Kenneth Cukier|Cukier, Kenneth]], "Ready for Robots? How to Think about the Future of AI", ''[[Foreign Affairs]]'', vol. 98, no. 4 (July/August 2019), pp. 192–98. [[George Dyson (science historian)|George Dyson]], historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist [[Alex Pentland]] writes: "Current [[machine learning|AI machine-learning]] [[algorithm]]s are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
* [[Pedro Domingos|Domingos, Pedro]], "Our Digital Doubles: AI will serve our species, not control it", ''[[Scientific American]]'', vol. 319, no. 3 (September 2018), pp. 88–93.
* [[Alison Gopnik|Gopnik, Alison]], "Making AI More Human: Artificial intelligence has staged a revival by starting to incorporate what we know about how children learn", ''[[Scientific American]]'', vol. 316, no. 6 (June 2017), pp. 60–65.
* Johnston, John (2008) ''The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI'', MIT Press.
* [[Christof Koch|Koch, Christof]], "Proust among the Machines", ''[[Scientific American]]'', vol. 321, no. 6 (December 2019), pp. 46–49. [[Christof Koch]] doubts the possibility of "intelligent" machines attaining [[consciousness]], because "[e]ven the most sophisticated [[brain simulation]]s are unlikely to produce conscious [[feelings]]." (p. 48.) According to Koch, "Whether machines can become [[sentience|sentient]] [is important] for [[ethics|ethical]] reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to... humans. Per GNW [the [[Global Workspace Theory#Global neuronal workspace|Global Neuronal Workspace]] theory], they turn from mere objects into subjects... with a [[point of view (philosophy)|point of view]].... Once computers' [[cognitive abilities]] rival those of humanity, their impulse to push for legal and political [[rights]] will become irresistible – the right not to be deleted, not to have their memories wiped clean, not to suffer [[pain]] and degradation. The alternative, embodied by IIT [Integrated Information Theory], is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself." (p. 49.)
* [[Gary Marcus|Marcus, Gary]], "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", ''[[Scientific American]]'', vol. 316, no. 3 (March 2017), pp. 58–63. A stumbling block to AI has been an incapacity for reliable [[disambiguation]]. An example is the "pronoun disambiguation problem": a machine has no way of determining to whom or what a [[pronoun]] in a sentence refers. (p. 61.)
* E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3044448 SSRN, part 2(3)] {{Webarchive|url=https://web.archive.org/web/20180524201340/https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3044448 |date=24 May 2018 }}.
* [[George Musser]], "[[Artificial Imagination]]: How machines could learn [[creativity]] and [[common sense]], among other human qualities", ''[[Scientific American]]'', vol. 320, no. 5 (May 2019), pp. 58–63.
* Myers, Courtney Boyd ed. (2009). [https://www.forbes.com/2009/06/22/singularity-robots-computers-opinions-contributors-artificial-intelligence-09_land.html "The AI Report"] {{Webarchive|url=https://web.archive.org/web/20170729114303/https://www.forbes.com/2009/06/22/singularity-robots-computers-opinions-contributors-artificial-intelligence-09_land.html |date=29 July 2017 }}. ''Forbes'' June 2009
* {{cite book |last=Raphael |first=Bertram |author-link=Bertram Raphael |year=1976 |title=The Thinking Computer |publisher=W.H.Freeman and Company |isbn=978-0-7167-0723-3 |url=https://archive.org/details/thinkingcomputer00raph |access-date=22 August 2020 |archive-date=26 July 2020 |archive-url=https://web.archive.org/web/20200726215746/https://archive.org/details/thinkingcomputer00raph |url-status=live }}
* Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", ''[[Foreign Affairs]]'', vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
* {{cite journal | last1 = Serenko | first1 = Alexander | year = 2010 | title = The development of an AI journal ranking based on the revealed preference approach | url = http://www.aserenko.com/papers/JOI_Serenko_AI_Journal_Ranking_Published.pdf | journal = Journal of Informetrics | volume = 4 | issue = 4 | pages = 447–459 | doi = 10.1016/j.joi.2010.04.001 | access-date = 24 August 2013 | archive-date = 4 October 2013 | archive-url = https://web.archive.org/web/20131004215236/http://www.aserenko.com/papers/JOI_Serenko_AI_Journal_Ranking_Published.pdf | url-status = live }}
* {{cite journal | last1 = Serenko | first1 = Alexander | author2 = Michael Dohan | year = 2011 | title = Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence | url = http://www.aserenko.com/papers/JOI_AI_Journal_Ranking_Serenko.pdf | journal = Journal of Informetrics | volume = 5 | issue = 4 | pages = 629–649 | doi = 10.1016/j.joi.2011.06.002 | access-date = 12 September 2013 | archive-date = 4 October 2013 | archive-url = https://web.archive.org/web/20131004212839/http://www.aserenko.com/papers/JOI_AI_Journal_Ranking_Serenko.pdf | url-status = live }}
* [[Adam Tooze|Tooze, Adam]], "Democracy and Its Discontents", ''[[The New York Review of Books]]'', vol. LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. "Democracy has no clear answer for the mindless operation of [[bureaucracy|bureaucratic]] and [[technology|technological power]]. We may indeed be witnessing its extension in the form of artificial intelligence and robotics. Likewise, after decades of dire warning, the [[environmentalism|environmental problem]] remains fundamentally unaddressed.... Bureaucratic overreach and environmental catastrophe are precisely the kinds of slow-moving existential challenges that democracies deal with very badly.... Finally, there is the threat du jour: [[corporation]]s and the technologies they promote." (pp. 56–57.)
 
{{refend}}
== Editor's recommendations ==