==Limitations==

Although machine learning has been transformative in some fields, effective machine learning remains difficult, because finding patterns is hard and there is often not enough training data available; as a result, many machine learning programs fail to deliver the expected value<ref>[http://web.archive.org/web/20170320225010/https://www.bloomberg.com/news/articles/2016-11-10/why-machine-learning-models-often-fail-to-learn-quicktake-q-a "Why Machine Learning Models Often Fail to Learn: QuickTake Q&A"]. ''Bloomberg.com.'' 2016-11-10. Retrieved 2017-04-10.</ref><ref>[https://hbr.org/2017/04/the-first-wave-of-corporate-ai-is-doomed-to-fail "The First Wave of Corporate AI Is Doomed to Fail"]. Harvard Business Review. 2017-04-18. Retrieved 2018-08-20.</ref><ref>[https://venturebeat.com/2016/09/17/why-the-a-i-euphoria-is-doomed-to-fail/ "Why the A.I. euphoria is doomed to fail"]. VentureBeat. 2016-09-18. Retrieved 2018-08-20.</ref>. The reasons for this are many: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, the wrong tools and people, lack of resources, and evaluation problems<ref>[https://www.kdnuggets.com/2018/07/why-machine-learning-project-fail.html "9 Reasons why your machine learning project will fail"]. www.kdnuggets.com. Retrieved 2018-08-20.</ref>.
      
Machine learning approaches are particularly prone to different forms of data bias. A machine learning system trained only on current customers may be unable to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases that already exist in society<ref>{{Cite journal|last=Garcia|first=Megan|date=2016|title=Racist in the Machine|url=https://read.dukeupress.edu/world-policy-journal/article/33/4/111-117/30942|journal=World Policy Journal|language=en|volume=33|issue=4|pages=111–117|issn=0740-2775}}</ref>.
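
The sampling-bias effect described above can be illustrated with a minimal, self-contained sketch: a classifier fitted only on one synthetic customer group generalizes poorly to a second group whose pattern never appeared in its training data. The data, the two groups, and the choice of a logistic-regression model below are assumptions made for this example, not details of any system cited in this section.

<syntaxhighlight lang="python">
# Hypothetical sketch of sampling bias: a model fitted only on one customer
# group ("current customers") is then evaluated on a second, unseen group.
# All data below are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Generate synthetic customers whose label depends on `weights`."""
    X = rng.normal(size=(n, 2))
    y = (X @ weights + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A (represented in the training data) and group B (absent from it)
# follow different feature-to-outcome relationships.
X_a, y_a = make_group(2000, weights=np.array([2.0, 0.0]))
X_b, y_b = make_group(2000, weights=np.array([0.0, 2.0]))

# Train only on group A -- the "current customers".
model = LogisticRegression().fit(X_a, y_a)

print("accuracy on group A (seen):  ", accuracy_score(y_a, model.predict(X_a)))
print("accuracy on group B (unseen):", accuracy_score(y_b, model.predict(X_b)))
# Typical outcome: high accuracy on A, near chance level on B, because B's
# pattern was never represented in the training data.
</syntaxhighlight>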

Language models learned from data have been shown to contain human-like biases<ref>{{Cite journal|last=Caliskan|first=Aylin|last2=Bryson|first2=Joanna J.|last3=Narayanan|first3=Arvind|date=2017-04-14|title=Semantics derived automatically from language corpora contain human-like biases|url=http://science.sciencemag.org/content/356/6334/183|journal=Science|language=en|volume=356|issue=6334|pages=183–186|doi=10.1126/science.aal4230|issn=0036-8075|pmid=28408601}}</ref><ref>Wang, Xinan; Dasgupta, Sanjoy (2016), Lee, D. D.; Sugiyama, M.; Luxburg, U. V.; Guyon, I., eds., [http://papers.nips.cc/paper/6227-an-algorithm-for-l1-nearest-neighbor-search-via-monotonic-embedding.pdf "An algorithm for L1 nearest neighbor search via monotonic embedding"] (PDF), ''Advances in Neural Information Processing Systems 29'', Curran Associates, Inc., pp. 983–991, Retrieved 2018-08-20</ref>; a simplified sketch of how such associations can be measured is given after this paragraph. Machine learning systems used for criminal risk assessment have been found to be biased against black people<ref>[https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing "Machine Bias"]. ProPublica. Julia Angwin, Jeff Larson, Lauren Kirchner, Surya Mattu. 2016-05-23. Retrieved 2018-08-20.</ref><ref>[https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html "Opinion | When an Algorithm Helps Send You to Prison"]. New York Times. Retrieved 2018-08-20.</ref>. In 2015, photos of black people on Google were often tagged as gorillas<ref>[https://www.bbc.co.uk/news/technology-33347866 "Google apologises for racist blunder"]. BBC News. 2015-07-01. Retrieved 2018-08-20.</ref>, and by 2018 this had still not been properly resolved: Google reportedly still relied on the workaround of removing all gorillas from the training data, so the system could not recognize real gorillas at all<ref>[https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai "Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech"]. The Verge. Retrieved 2018-08-20.</ref>. Similar problems with recognizing non-white people have been found in many other systems<ref>[https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html "Opinion | Artificial Intelligence's White Guy Problem"]. New York Times. Retrieved 2018-08-20.</ref>. In 2016, Microsoft tested a [https://en.wikipedia.org/wiki/Chatbot chatbot] that learned from Twitter, and it quickly picked up racist and sexist language<ref>Metz, Rachel. [https://www.technologyreview.com/s/601111/why-microsoft-accidentally-unleashed-a-neo-nazi-sexbot/ "Why Microsoft's teen chatbot, Tay, said lots of awful things online"]. MIT Technology Review. Retrieved 2018-08-20.</ref>. Because of such challenges, the effective use of machine learning in other domains may still be a long way off<ref>Simonite, Tom. [https://www.technologyreview.com/s/603944/microsoft-ai-isnt-yet-adaptable-enough-to-help-businesses/ "Microsoft says its racist chatbot illustrates how AI isn't adaptable enough to help most businesses"]. MIT Technology Review. Retrieved 2018-08-20.</ref>. In 2018, a self-driving car from [https://en.wikipedia.org/wiki/Uber Uber] failed to detect a pedestrian, who was killed in the resulting crash<ref>[https://www.economist.com/the-economist-explains/2018/05/29/why-ubers-self-driving-car-killed-a-pedestrian "Why Uber's self-driving car killed a pedestrian"]. The Economist. Retrieved 2018-08-20.</ref>. Attempts to apply machine learning in healthcare with the IBM Watson system failed to deliver, even after years of work and billions of dollars of investment<ref>[https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/ "IBM's Watson recommended 'unsafe and incorrect' cancer treatments - STAT"]. STAT. 2018-07-25. Retrieved 2018-08-21.</ref><ref>Hernandez, Daniela; Greenwald, Ted (2018-08-11). [https://www.wsj.com/articles/ibm-bet-billions-that-watson-could-improve-cancer-treatment-it-hasnt-worked-1533961147 "IBM Has a Watson Dilemma"]. Wall Street Journal. ISSN 0099-9660. Retrieved 2018-08-21.</ref>.
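
In the spirit of the word-embedding association tests reported by Caliskan et al., the sketch below shows one heavily simplified way such bias can be quantified: comparing how strongly a target word's vector associates with two contrasting sets of attribute words. The toy 2-D vectors and word lists are assumptions made for this illustration only; real tests use full pretrained embeddings and larger word sets.

<syntaxhighlight lang="python">
# Simplified, WEAT-style association score computed on made-up toy vectors.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """Mean similarity of word vector w to attribute set A minus to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

# Toy 2-D "embeddings"; the values are invented purely for readability.
emb = {
    "flower":     np.array([0.9, 0.1]),
    "insect":     np.array([0.1, 0.9]),
    "pleasant":   np.array([0.8, 0.2]),
    "unpleasant": np.array([0.2, 0.8]),
}

A = [emb["pleasant"]]    # attribute set: pleasant words
B = [emb["unpleasant"]]  # attribute set: unpleasant words

# A positive score means the target word sits closer to the "pleasant" set.
print("flower:", association(emb["flower"], A, B))
print("insect:", association(emb["insect"], A, B))
</syntaxhighlight>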

==Model Evaluation==