Crowdsourcing


[File: Crowdtesting.jpg – This graphic symbolizes the use of ideas from a wide range of individuals, as used in crowdsourcing.]

Crowdsourcing is a sourcing model in which individuals or organizations obtain goods or services, including ideas, voting, micro-tasks and finances, from a large, relatively open and often rapidly evolving group of participants. Crowdsourcing typically involves using the internet to attract and divide work between participants to achieve a cumulative result. The word "crowdsourcing" itself - a portmanteau of "crowd" and "outsourcing" - was allegedly coined in 2005.[1][2][3][4] Crowdsourcing is not necessarily an "online" activity; it existed before Internet access became a household commodity.[5]

Major differences distinguish crowdsourcing from outsourcing. Crowdsourcing comes from a less-specific, more public group, whereas outsourcing is commissioned from a specific, named group, and includes a mix of bottom-up and top-down processes.[6][7][8] Advantages of using crowdsourcing may include improved costs, speed, quality, flexibility, scalability, or diversity.[9][10]

Some forms of crowdsourcing, such as "idea competitions" or "innovation contests", provide ways for organizations to learn beyond the "base of minds" provided by their employees (e.g. LEGO Ideas).[11][12] Tedious "microtasks" performed in parallel by large, paid crowds (e.g. Amazon Mechanical Turk) are another form of crowdsourcing. Not-for-profit organizations have used crowdsourcing to develop common goods (e.g. Wikipedia).[13] The effect of user communication and the platform presentation should be taken into account when evaluating the performance of ideas in crowdsourcing contexts.[14]

Crowdsourcing can play a role in democratization.[15]

Definitions

The term "crowdsourcing" was coined in 2005 by Jeff Howe and Mark Robinson, editors at Wired, to describe how businesses were using the Internet to "outsource work to the crowd",[1] which quickly led to the portmanteau "crowdsourcing". Howe first published a definition for the term crowdsourcing in a companion blog post to his June 2006 Wired article, "The Rise of Crowdsourcing", which came out in print just days later:[16]

Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.

The term "crowdsourcing" was coined in 2005 by Jeff Howe and Mark Robinson, editors at Wired, to describe how businesses were using the Internet to "outsource work to the crowd", which quickly led to the portmanteau "crowdsourcing". Howe first published a definition for the term crowdsourcing in a companion blog post to his June 2006 Wired article, "The Rise of Crowdsourcing", which came out in print just days later: Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.

《连线》(Wired)杂志的编辑杰夫•豪(Jeff Howe)和马克•罗宾逊(Mark Robinson)于2005年提出“众包”(crowdsourcing),描述企业如何利用互联网“将工作外包给大众”("outsource work to the crowd"),“众包”(crowdsourcing)这个合成词便很快出现。2006年6月,豪在《连线》杂志上发表了一篇名为《众包的兴起》(The Rise of Crowdsourcing)的文章,该文章几天后便刊印出来。文章中包括他的一条博客,首次给众包下了定义:

简单来说,众包指一家公司或机构以公开选拔的形式,将一项曾由员工履行的职能外包给一个未定义(通常是大型)的人际网络。工作可以群体完成(如果工作是合作完成),但也常常由单独的个人完成。关键是通过公开选拔建立庞大的潜在劳动力网络。

In a 1 February 2008 article, Daren C. Brabham, "the first [person] to publish scholarly research using the word crowdsourcing" and writer of the 2013 book Crowdsourcing, defined it as an "online, distributed problem-solving and production model".[17][18] Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms is affected not only by their quality, but also by the communication among users about the ideas, and the presentation in the platform itself.[14]

After studying more than 40 definitions of crowdsourcing in the scientific and popular literature, Enrique Estellés-Arolas and Fernando González Ladrón-de-Guevara, researchers at the Technical University of Valencia, developed a new integrating definition:[4]

Crowdsourcing is a type of participative online activity in which an individual, an institution, a nonprofit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task. The undertaking of the task; of variable complexity and modularity, and; in which the crowd should participate, bringing their work, money, knowledge [and/or] experience, always entails mutual benefit. The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem, or the development of individual skills, while the crowdsourcer will obtain and use to their advantage that which the user has brought to the venture, whose form will depend on the type of activity undertaken.

As mentioned by the definitions of Brabham and Estellés-Arolas and Ladrón-de-Guevara above, crowdsourcing in the modern conception is an IT-mediated phenomenon, meaning that a form of IT is always used to create and access crowds of people.[19][20] In this respect, crowdsourcing has been considered to encompass three separate but stable techniques: competition crowdsourcing, virtual labor market crowdsourcing, and open collaboration crowdsourcing.[10][19][21]

Henk van Ess, a college lecturer in online communications, emphasizes the need to "give back" the crowdsourced results to the public on ethical grounds. His nonscientific, noncommercial definition is widely cited in the popular press:[22]

Crowdsourcing is channeling the experts' desire to solve a problem and then freely sharing the answer with everyone.

Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public, and an open call for contributions to help solve the problem. Members of the public submit solutions that are then owned by the entity that originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or with recognition. In other cases, the only rewards may be kudos or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time or from experts or small businesses, which were previously unknown to the initiating organization.[5]

Another consequence of the multiple definitions is the controversy surrounding what kinds of activities may be considered crowdsourcing.

Historical examples

While the term "crowdsourcing" was popularized online to describe Internet-based activities,[18] some examples of projects, in retrospect, can be described as crowdsourcing.

While the term "crowdsourcing" was popularized online to describe Internet-based activities, some examples of projects, in retrospect, can be described as crowdsourcing.

虽然”众包”这个词在网上流行起来是为了描述以互联网为基础的活动,但回顾过去,一些项目的例子可以被描述为众包。

Timeline of major events

  • 594 BCE – Solon requires that all citizens swear to uphold his laws, which, among other things, strengthens citizen inclusion and involvement in the governance of Ancient Athens, the earliest example of democratic government for which reliable documentation exists.
  • 618–907 – The Tang dynasty introduces the joint-stock company, the earliest form of crowdfunding.
  • 1714 – The longitude rewards: When the British government was trying to find a way to measure a ship's longitudinal position, it offered a monetary prize to whoever came up with the best solution.[23]
  • 1783 – King Louis XVI offered an award to the person who could "make the alkali" by decomposing sea salt by the "simplest and most economic method".[23]
  • 1848 – Matthew Fontaine Maury distributed 5000 copies of his Wind and Current Charts free of charge on the condition that sailors returned a standardized log of their voyage to the U.S. Naval Observatory. By 1861, he had distributed 200,000 copies free of charge, on the same conditions.[24]
  • 1849 – A network of some 150 volunteer weather observers all over the USA was set up as a part of the Smithsonian Institution's Meteorological Project started by the Smithsonian's first Secretary, Joseph Henry, who used the telegraph to gather volunteers' data and create a large weather map, making new information available to the public daily. For instance, volunteers tracked a tornado passing through Wisconsin and sent the findings via telegraph to the Smithsonian. Henry's project is considered the origin of what later became the National Weather Service. Within a decade, the project had more than 600 volunteer observers and had spread to Canada, Mexico, Latin America, and the Caribbean.[25]
  • 1884 – Publication of the Oxford English Dictionary: 800 volunteers catalogued words to create the first fascicle of the OED.[23]
  • 1916 – Planters Peanuts contest: The Mr. Peanut logo was designed by a 14-year-old boy who won the Planter Peanuts logo contest.[23]
  • 1957 – Jørn Utzon, winner of the design competition for the Sydney Opera House.[23]
  • 1970 – French amateur photo contest C'était Paris en 1970 ("This Was Paris in 1970") sponsored by the city of Paris, France-Inter radio, and the Fnac: 14,000 photographers produced 70,000 black-and-white prints and 30,000 color slides of the French capital to document the architectural changes of Paris. Photographs were donated to the Bibliothèque historique de la ville de Paris.[26]
  • 1979 – Robert Axelrod invites academics on-line to submit FORTRAN algorithms to play the repeated Prisoner's Dilemma; this results in Tit for Tat.[27]
  • 1991 – Linus Torvalds begins work on the Linux operating system, inviting programmers around the world to contribute code.
  • 1996 – The Hollywood Stock Exchange was founded: Allowed for the buying and selling of shares.[23]
  • 1997 – British rock band Marillion raised $60,000 from their fans to help finance their U.S. tour.[23]
  • 1999 – SETI@home was launched by the University of California, Berkeley. Volunteers can contribute to searching for signals that might come from extraterrestrial intelligence by installing a program that uses idle computer time for analyzing chunks of data recorded by radio telescopes involved in the SERENDIP program.
  • 2000 – JustGiving established: This online platform allows the public to help raise money for charities.[23]
  • 2000 – UNV Online Volunteering service launched: Connecting people who commit their time and skills over the Internet to help organizations address development challenges.[28]
  • 2000 – iStockPhoto was founded: The free stock imagery website allows the public to contribute to and receive commission for their contributions.[29]
  • 2001 – Launch of Wikipedia: "Free-access, free content Internet encyclopedia".[30]
  • 2001 – Foundation of Topcoder – crowdsourcing software development company.[31][32]
  • 2004 – OpenStreetMap, a collaborative project to create a free editable map of the world, is launched.[33]
  • 2004 – Toyota's first "Dream car art" contest: Children were asked globally to draw their "dream car of the future".[34]
  • 2005 – Kodak's "Go for the Gold" contest: Kodak asked anyone to submit a picture of a personal victory.[34]
  • 2006 – Jeff Howe coined the term crowdsourcing in Wired.[29]
  • 2009 – Waze, a community-oriented GPS app, allows for users to submit road info and route data based on location, such as reports of car accidents or traffic, and integrates that data into its routing algorithms for all users of the app.
  • 2011 – Casting of Flavours ("Do Us a Flavor" in the USA) – a campaign launched by PepsiCo's Lay's in Spain, centered on a contest to propose a new flavor for the snack.[35]
  • 2012 - Crowdsourcing Week platform launches in Singapore.
  • 2019 - Crowdsourcing Week launches the global BOLD Awards to recognise achievements and innovations in crowdsourcing and related technology.[36]

Early competitions

Crowdsourcing has often been used in the past as a competition to discover a solution. The French government proposed several of these competitions, often rewarded with Montyon Prizes, created for poor Frenchmen who had done virtuous acts.[37] These included the Leblanc process, or the Alkali prize, where a reward was provided for separating the salt from the alkali, and the prize for Fourneyron's turbine, awarded when the first commercial hydraulic turbine was developed.[38]

In response to a challenge from the French government, Nicolas Appert won a prize for inventing a new way of food preservation that involved sealing food in air-tight jars.[39] The British government provided a similar reward to find an easy way to determine a ship's longitude in the Longitude Prize. During the Great Depression, out-of-work clerks tabulated higher mathematical functions in the Mathematical Tables Project as an outreach project.[40] One of the biggest crowdsourcing campaigns was a public design contest in 2010, hosted by the Indian government's finance ministry to create a symbol for the Indian rupee. Thousands of people sent in entries before the government zeroed in on the final symbol based on the Devanagari script using the letter Ra.[41]

In astronomy

Crowdsourcing in astronomy was used in the early 19th century by astronomer Denison Olmsted. After being awakened in a late November night due to a meteor shower taking place, Olmsted noticed a pattern in the shooting stars. Olmsted wrote a brief report of this meteor shower in the local newspaper. "As the cause of 'Falling Stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible," Olmsted wrote to readers, in a report subsequently picked up and pooled to newspapers nationwide. Responses came pouring in from many states, along with scientists' observations sent to the American Journal of Science and Arts.[42] These responses helped him make a series of scientific breakthroughs, the major discovery being that meteor showers are seen nationwide, and fall from space under the influence of gravity. Also, they demonstrated that the showers appeared in yearly cycles, a fact that often eluded scientists. The responses allowed him to suggest a velocity for the meteors, although his estimate turned out to be too conservative. If he had just taken the responses as presented, his conjecture on the meteors' velocity would have been closer to their actual speed.

A more recent version of crowdsourcing in astronomy is NASA's photo organizing project,[43] which asks internet users to browse photos taken from space and try to identify the location the picture is documenting.[44]

In energy system research

Energy system models require large and diverse datasets, increasingly so given the trend towards greater temporal and spatial resolution.[45] In response, there have been several initiatives to crowdsource this data. Launched in December 2009, OpenEI is a collaborative website, run by the US government, providing open energy data.[46][47] While much of its information is from US government sources, the platform also seeks crowdsourced input from around the world.[48] The semantic wiki and database Enipedia also publishes energy systems data using the concept of crowdsourced open information. Enipedia went live in March 2011.[49][50]:184–188

In genealogy research

Genealogical research was using crowdsourcing techniques long before personal computers were common. Beginning in 1942, members of The Church of Jesus Christ of Latter-day Saints encouraged members to submit information about their ancestors. The submitted information was gathered together into a single collection. In 1969, to encourage more people to participate in gathering genealogical information about their ancestors, the church started the three-generation program. In this program, church members were asked to prepare documented family group record forms for the first three generations. The program was later expanded to encourage members to research at least four generations and became known as the four-generation program.[51]

Institutes that have records of interest to genealogical research have used crowds of volunteers to create catalogs and indices to records.

In geography

Volunteered Geographic Information (VGI) is geographic information generated through crowdsourcing, as opposed to traditional methods of Professional Geographic Information (PGI).[52] In describing the built environment, VGI has many advantages over PGI, primarily perceived currency,[53] accuracy[54] and authority.[55] OpenStreetMap is an example of a crowdsourced mapping project.[33]

In engineering

Many companies are introducing crowdsourcing to grow their engineering capabilities, find solutions to unsolved technical challenges, and adopt new technologies such as 3D printing and the Internet of Things (IoT).

In genetic genealogy research

Genetic genealogy is a combination of traditional genealogy with genetics. The rise of personal DNA testing, after the turn of the century, by companies such as Gene by Gene, FTDNA, GeneTree, 23andMe, and Ancestry.com, has led to public and semipublic databases of DNA testing which use crowdsourcing techniques. In recent years, citizen science projects have become increasingly focused on providing benefits to scientific research.[56][57][58] This includes support, organization, and dissemination of personal DNA (genetic) testing. Similar to amateur astronomy, citizen scientists encouraged by volunteer organizations like the International Society of Genetic Genealogy[59] have provided valuable information and research to the professional scientific community.[60]

A blurb by Spencer Wells, director of the Genographic Project:

Since 2005, the Genographic Project, has used the latest genetic technology to expand our knowledge of the human story, and its pioneering use of DNA testing to engage and involve the public in the research effort has helped to create a new breed of "citizen scientist." Geno 2.0 expands the scope for citizen science, harnessing the power of the crowd to discover new details of human population history.[61]

In journalism

Crowdsourcing is increasingly used in professional journalism. Journalists can crowdsource information from the crowd, typically by fact-checking the material and then using what they have gathered in their articles as they see fit. The leading daily newspaper in Sweden successfully used crowdsourcing to investigate home loan interest rates in the country in 2013–2014, resulting in over 50,000 submissions.[62] The leading daily newspaper in Finland crowdsourced an investigation into stock short selling in 2011–2012, and the crowdsourced information led to the revelation of a questionable tax evasion scheme at a Finnish bank. The bank executive was fired and policy changes followed.[63] TalkingPointsMemo in the United States asked its readers to examine 3,000 emails concerning the firing of federal prosecutors in 2008. The British newspaper The Guardian crowdsourced the examination of hundreds of thousands of documents in 2009.[64]


In linguistics

Crowdsourcing strategies have been applied to estimate word knowledge, vocabulary size, and word origin.[65] Implicit crowdsourcing on social media has also helped efficiently approximate sociolinguistic data. Reddit conversations in various location-based subreddits were analyzed for the presence of grammatical forms unique to a regional dialect. These were then used to map the extent of the speaker population. The results could roughly approximate large-scale surveys on the subject without engaging in field interviews.[66]
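
The Reddit study above is summarized only at a high level; as a rough, hypothetical sketch of its core counting step, the snippet below tallies how often a dialect-specific construction (here the "needs + past participle" form associated with western Pennsylvania) appears in comments from different location-based subreddits. The comment data and subreddit names are invented, and a real study would pull comments through the Reddit API and control for many more factors.

```python
import re
from collections import Counter

# Regex for the "needs + past participle" construction, e.g. "the car needs washed".
DIALECT_FORM = re.compile(r"\bneeds\s+\w+ed\b", re.IGNORECASE)

def tally_dialect_form(comments):
    """Return the rate of the dialect form per comment, keyed by subreddit.

    comments: iterable of (subreddit, text) pairs, e.g. gathered via the Reddit API.
    """
    hits, totals = Counter(), Counter()
    for subreddit, text in comments:
        totals[subreddit] += 1
        hits[subreddit] += len(DIALECT_FORM.findall(text))
    # Normalize by comment count so large subreddits are not over-weighted.
    return {sub: hits[sub] / totals[sub] for sub in totals}

# Invented sample data for illustration only.
sample = [
    ("pittsburgh", "The car needs washed before the game."),
    ("pittsburgh", "My yard needs mowed again."),
    ("boston", "The car needs to be washed."),
    ("boston", "Wicked cold out today."),
]
print(tally_dialect_form(sample))  # higher rates suggest where the form is in use
```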

In ornithology

Another early example of crowdsourcing occurred in the field of ornithology. On 25 December 1900, Frank Chapman, an early officer of the National Audubon Society, initiated a tradition, dubbed the "Christmas Day Bird Census". The project called birders from across North America to count and record the number of birds in each species they witnessed on Christmas Day. The project was successful, and the records from 27 different contributors were compiled into one bird census, which tallied around 90 species of birds.[67] This large-scale collection of data constituted an early form of citizen science, the premise upon which crowdsourcing is based. In the 2012 census, more than 70,000 individuals participated across 2,369 bird count circles.[68] Christmas 2014 marked the National Audubon Society's 115th annual Christmas Bird Count.

In public policy

Crowdsourcing public policy and the production of public services is also referred to as citizen sourcing. While some scholars argue crowdsourcing is a policy tool[69] or a definite means of co-production,[70] others question that and argue that crowdsourcing should be considered just as a technological enabler that can simply increase the speed and ease of participation.[71]

The first conference focusing on Crowdsourcing for Politics and Policy took place at Oxford University, under the auspices of the Oxford Internet Institute in 2014. Research has emerged since 2012[72] that focuses on the use of crowdsourcing for policy purposes.[73][74] These include the experimental investigation of the use of Virtual Labor Markets for policy assessment,[75] and an assessment of the potential for citizen involvement in process innovation for public administration.[76]

Governments across the world are increasingly using crowdsourcing for knowledge discovery and civic engagement. Iceland crowdsourced its constitution reform process in 2011, and Finland has crowdsourced several law reform processes to address its off-road traffic laws. The Finnish government allowed citizens to go on an online forum to discuss problems and possible resolutions regarding some off-road traffic laws. The crowdsourced information and resolutions would then be passed on to legislators for them to refer to when making a decision, letting citizens more directly contribute to public policy.[77][78] The City of Palo Alto is crowdsourcing people's feedback for its Comprehensive City Plan update in a process that started in 2015.[79] The House of Representatives in Brazil has used crowdsourcing in policy reforms, and federal agencies in the United States have used crowdsourcing for several years.[80]

In seismology

The European-Mediterranean Seismological Centre (EMSC) has developed a seismic detection system by monitoring the traffic peaks on its website and by the analysis of keywords used on Twitter.[81]
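
EMSC's actual pipeline is not detailed here; as a minimal sketch of the general idea, the snippet below flags minutes in which the count of earthquake-related keyword mentions (or, equivalently, website hits) jumps far above its recent baseline. The window size, threshold, and sample counts are assumptions chosen for illustration; a real system would typically cross-check such alerts against instrumental seismic data.

```python
from statistics import mean, stdev

def detect_spikes(counts, window=12, threshold=4.0):
    """Flag indices where a per-minute keyword count spikes above its recent baseline.

    counts: per-minute numbers of messages containing words such as "earthquake"
    (or per-minute website hits). A spike is reported when a value exceeds the
    rolling mean of the previous `window` minutes by more than `threshold`
    standard deviations.
    """
    alerts = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        baseline, spread = mean(history), stdev(history)
        if spread == 0:
            spread = 1.0  # avoid division by zero on a perfectly flat baseline
        if (counts[i] - baseline) / spread > threshold:
            alerts.append(i)
    return alerts

# Hypothetical per-minute counts: quiet traffic, then a sudden burst of mentions.
per_minute = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 55, 80, 60]
print(detect_spikes(per_minute))  # indices of minutes that look like a felt event
```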

In libraries, museums and archives

Newspaper text correction at the National Library of Australia was an early, influential example of work with text transcriptions for crowdsourcing in cultural heritage institutions.[82] The steve.museum project provided a prototype for tagging artworks.[83] Crowdsourcing is used in libraries for OCR corrections on digitized texts, for tagging and for funding, especially in the absence of financial and human means.[84]:5 Volunteers can contribute explicitly, with conscious effort, or implicitly, without realizing it, by turning the text on the raw newspaper image into human-corrected digital form.[84]:16
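
As an illustrative aside rather than a description of the National Library of Australia's actual workflow, one simple way to combine several volunteers' corrections of the same OCR line is a per-token majority vote. The sketch below assumes the corrected versions are already aligned token-for-token (real systems would have to enforce or approximate this with sequence alignment); the transcriptions are invented.

```python
from collections import Counter

def consensus_line(transcriptions):
    """Merge volunteer corrections of one OCR line by per-token majority vote.

    transcriptions: list of strings, each one volunteer's corrected version of the line.
    """
    token_lists = [t.split() for t in transcriptions]
    length = max(len(tokens) for tokens in token_lists)
    merged = []
    for position in range(length):
        # Count the candidate tokens offered for this position and keep the most common.
        candidates = Counter(
            tokens[position] for tokens in token_lists if position < len(tokens)
        )
        merged.append(candidates.most_common(1)[0][0])
    return " ".join(merged)

volunteer_versions = [
    "The mail steamer arrived on Tuesday",
    "The mail steamer arrived on Tuesday",
    "The mall steamer arrived on Tueslay",  # uncorrected OCR errors left in
]
print(consensus_line(volunteer_versions))  # -> "The mail steamer arrived on Tuesday"
```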

Modern methods

Currently, crowdsourcing has transferred mainly to the Internet, which provides a particularly beneficial venue for crowdsourcing since individuals tend to be more open in web-based projects where they are not being physically judged or scrutinized, and thus can feel more comfortable sharing. This approach ultimately allows for well-designed artistic projects because individuals are less conscious, or maybe even less aware, of scrutiny towards their work. In an online atmosphere, more attention can be given to the specific needs of a project, rather than spending as much time in communication with other individuals.[85]

According to a definition by Henk van Ess:[86]

"The crowdsourced problem can be huge (epic tasks like finding alien life or mapping earthquake zones) or very small ('where can I skate safely?'). Some examples of successful crowdsourcing themes are problems that bug people, things that make people feel good about themselves, projects that tap into niche knowledge of proud experts, subjects that people find sympathetic or any form of injustice."

Crowdsourcing can either take an explicit or an implicit route. Explicit crowdsourcing lets users work together to evaluate, share, and build different specific tasks, while implicit crowdsourcing means that users solve a problem as a side effect of something else they are doing.

With explicit crowdsourcing, users can evaluate particular items like books or webpages, or share by posting products or items. Users can also build artifacts by providing information and editing other people's work.

Implicit crowdsourcing can take two forms: standalone and piggyback. Standalone allows people to solve problems as a side effect of the task they are actually doing, whereas piggyback takes users' information from a third-party website to gather information.[87]

Implicit crowdsourcing can take two forms: standalone and piggyback. Standalone allows people to solve problems as a side effect of the task they are actually doing, whereas piggyback takes users' information from a third-party website to gather information.

隐性众包有两种形式:独立式和搭载式。独立式让人们在完成手头实际任务的同时顺带解决问题;搭载式则从第三方网站获取用户信息来收集数据。

In his 2013 book, Crowdsourcing, Daren C. Brabham puts forth a problem-based typology of crowdsourcing approaches:[88]


Daren C. Brabham 在其2013年出版的《Crowdsourcing》一书中,提出了一种基于问题类型的众包方法分类:

  • Knowledge discovery and management is used for information management problems where an organization mobilizes a crowd to find and assemble information. It is ideal for creating collective resources.
  • Distributed human intelligence tasking is used for information management problems where an organization has a set of information in hand and mobilizes a crowd to process or analyze the information. It is ideal for processing large data sets that computers cannot easily do.
  • Broadcast search is used for ideation problems where an organization mobilizes a crowd to come up with a solution to a problem that has an objective, provable right answer. It is ideal for scientific problem solving.
  • Peer-vetted creative production is used for ideation problems, where an organization mobilizes a crowd to come up with a solution to a problem which has an answer that is subjective or dependent on public support. It is ideal for design, aesthetic, or policy problems.


  • 知识发现与管理:用于信息管理类问题,即组织动员大众去查找和汇集信息。它适合用来创建集体性资源。
  • 分布式人类智能任务:用于信息管理类问题,即组织手中已有一批信息,动员大众来处理或分析这些信息。它适合处理计算机难以胜任的大型数据集。
  • 广播式搜索:用于创意类问题,即组织动员大众为一个存在客观、可验证的正确答案的问题提出解决方案。它适合科学问题求解。
  • 同行评议的创意生产:用于创意类问题,即组织动员大众为一个答案具有主观性或依赖公众支持的问题提出解决方案。它适合设计、美学或政策类问题。

Crowdsourcing often allows participants to rank each other's contributions, e.g. in answer to the question "What is one thing we can do to make Acme a great company?" One common method for ranking is "like" counting, where the contribution with the most likes ranks first. This method is simple and easy to understand, but it privileges early contributions, which have more time to accumulate likes. In recent years several crowdsourcing companies have begun to use pairwise comparisons, backed by ranking algorithms. Ranking algorithms do not penalize late contributions. They also produce results faster. Ranking algorithms have proven to be at least 10 times faster than manual stack ranking.[89] One drawback, however, is that ranking algorithms are more difficult to understand than like counting.


众包通常允许参与者对彼此的贡献进行排名,例如回答"我们能做哪一件事让 Acme 成为一家伟大的公司?"这类问题时。一种常见的排名方法是"点赞"计数,获得点赞最多的贡献排在第一位。这种方法简单易懂,但会偏向较早提交的贡献,因为它们有更多时间积累点赞。近年来,一些众包公司开始使用由排名算法支持的两两比较。排名算法不会惩罚较晚提交的贡献,而且出结果更快,已被证明比人工堆栈排名至少快10倍。不过,排名算法的一个缺点是比点赞计数更难理解。
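The contrast above between like counting and pairwise comparisons can be made concrete with a small sketch. The snippet below is only an illustration, not the proprietary ranking algorithm of any crowdsourcing company: it scores contributions from hypothetical pairwise votes with a simple Elo-style update, so a late entry is never penalized for having had less time to accumulate likes.

<syntaxhighlight lang="python">
def elo_rank(contributions, pairwise_votes, k=32, base=1500.0):
    """Rank contributions from pairwise "A beats B" votes with an Elo-style update.

    contributions  -- list of contribution identifiers
    pairwise_votes -- iterable of (winner_id, loser_id) tuples, one per comparison
    Returns the identifiers sorted from highest to lowest score.
    """
    score = {c: base for c in contributions}
    for winner, loser in pairwise_votes:
        # Expected probability that the winner beats the loser under current scores.
        expected = 1.0 / (1.0 + 10 ** ((score[loser] - score[winner]) / 400.0))
        # Only comparisons matter, not timestamps, so late submissions are not penalized.
        score[winner] += k * (1.0 - expected)
        score[loser] -= k * (1.0 - expected)
    return sorted(contributions, key=score.get, reverse=True)


# Hypothetical usage: four ideas and a handful of "which is better?" votes.
ideas = ["idea_A", "idea_B", "idea_C", "idea_D"]
votes = [("idea_B", "idea_A"), ("idea_B", "idea_C"),
         ("idea_A", "idea_C"), ("idea_D", "idea_C"), ("idea_B", "idea_D")]
print(elo_rank(ideas, votes))
</syntaxhighlight>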

In "How to Manage Crowdsourcing Platforms Effectively," Ivo Blohm states that there are four types of Crowdsourcing Platforms: Microtasking, Information Pooling, Broadcast Search, and Open Collaboration. They differ in the diversity and aggregation of contributions that are created. The diversity of information collected can either be homogenous or heterogenous. The aggregation of information can either be selective or integrative.[90]

In "How to Manage Crowdsourcing Platforms Effectively," Ivo Blohm states that there are four types of Crowdsourcing Platforms: Microtasking, Information Pooling, Broadcast Search, and Open Collaboration. They differ in the diversity and aggregation of contributions that are created. The diversity of information collected can either be homogenous or heterogenous. The aggregation of information can either be selective or integrative.

Ivo Blohm 在《How to Manage Crowdsourcing Platforms Effectively》一文中指出,众包平台有四种类型:微任务、信息汇集、广播式搜索和开放协作。它们在所产生贡献的多样性与汇聚方式上有所不同:收集到的信息可以是同质的,也可以是异质的;信息的汇聚方式可以是选择性的,也可以是整合性的。

Examples

Some common categories of crowdsourcing can be used effectively in the commercial world, including crowdvoting, crowdsolving, crowdfunding, microwork, creative crowdsourcing, crowdsource workforce management, and inducement prize contests. Although this may not be an exhaustive list, the items cover the current major ways in which people use crowds to perform tasks.[91]


一些常见的众包类别可以在商业领域得到有效运用,包括众投、众解、众筹、微工作、创意众包、众包劳动力管理和悬赏竞赛。虽然这未必是一份详尽的清单,但上述类别涵盖了目前人们利用大众完成任务的主要方式。

Crowdvoting

Crowdvoting occurs when a website gathers a large group's opinions and judgments on a certain topic. The Iowa Electronic Market is a prediction market that gathers crowds' views on politics and tries to ensure accuracy by having participants pay money to buy and sell contracts based on political outcomes.[92]


群众投票发生在网站收集一个大群体对某个话题的意见和判断时。爱荷华州电子市场(Iowa Electronic Market)是一个预测市场,它收集民众对政治的看法,并试图通过让参与者支付费用来买卖基于政治结果的合约来确保准确性。

Some of the most famous examples have made use of social media channels: Domino's Pizza, Coca-Cola, Heineken, and Sam Adams have thus crowdsourced a new pizza, bottle design, beer, and song, respectively.[93] Threadless.com selects the T-shirts it sells by having users provide designs and vote on the ones they like, which are then printed and available for purchase.[18]



一些最著名的例子利用了社交媒体渠道:多米诺披萨、可口可乐、喜力和 Sam Adams 分别通过众包征集了新的披萨、瓶身设计、啤酒和歌曲。Threadless.com 则通过让用户提交设计并投票选出喜欢的款式来决定出售哪些 T 恤,入选的设计随后被印制并供购买。

The California Report Card (CRC), a program jointly launched in January 2014 by the Center for Information Technology Research in the Interest of Society[94] and Lt. Governor Gavin Newsom, is an example of modern-day crowd voting. Participants access the CRC online and vote on six timely issues. Through principal component analysis, the users are then placed into an online "café" in which they can present their own political opinions and grade the suggestions of other participants. This system aims to effectively involve the greater public in relevant political discussions and highlight the specific topics with which Californians are most concerned.


加州成绩单(California Report Card,CRC)是2014年1月由社会利益信息技术研究中心与副州长加文·纽森(Gavin Newsom)联合发起的项目,是现代众投的一个例子。参与者在线访问 CRC,就六个时下议题进行投票。随后,系统通过主成分分析将用户分入一个在线"咖啡馆",他们可以在其中表达自己的政治观点,并为其他参与者的建议打分。该系统旨在让更广泛的公众有效参与相关政治讨论,并突出加州人最关心的具体议题。
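The paragraph above notes that CRC participants were grouped into an online "café" via principal component analysis. As a minimal sketch of how such grouping could work (the grades, the use of scikit-learn, and the k-means step are all assumptions for illustration, not the CRC's actual pipeline):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical data: each row is one participant's grades (1-5) on six issues.
grades = np.array([
    [5, 4, 2, 1, 3, 4],
    [4, 5, 1, 2, 3, 5],
    [1, 2, 5, 4, 4, 1],
    [2, 1, 4, 5, 5, 2],
    [3, 3, 3, 3, 3, 3],
    [5, 5, 2, 2, 4, 4],
])

# Reduce the six-dimensional opinion vectors to two principal components,
# then group nearby participants, e.g. to seat them in the same discussion "cafe".
coords = PCA(n_components=2).fit_transform(grades)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print(coords.round(2))
print(groups)
</syntaxhighlight>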

Crowdvoting's value in the movie industry was shown in 2009, when a crowd accurately predicted the success or failure of a movie based on its trailer,[95][96] a feat that was replicated in 2013 by Google.[97]


2009年,一群人仅根据预告片就准确预测了一部电影的成败,显示了众投在电影行业的价值;谷歌在2013年重现了这一成果。

On reddit, users collectively rate web content, discussions and comments as well as questions posed to persons of interest in "AMA" and AskScience online interviews.


在 reddit 上,用户共同对网络内容、讨论和评论进行评分,也对在"AMA"和 AskScience 在线问答中向受访者提出的问题进行评分。

In 2017, Project Fanchise purchased a team in the Indoor Football League and created the Salt Lake Screaming Eagles, a fan-run team. Using a mobile app, fans voted on the team's day-to-day operations, the mascot name, the signing of players, and even the offensive play-calling during games.[98]


2017年,Project Fanchise 收购了室内橄榄球联盟的一支球队,并创建了由球迷运营的盐湖城尖叫鹰队。球迷们通过手机应用,对球队的日常运营、吉祥物名称、球员签约,甚至比赛中的进攻战术选择进行投票。

Crowdsourcing creative work


众包创意工作

Creative crowdsourcing spans sourcing creative projects such as graphic design, crowdsourcing architecture, product design,[11] apparel design, movies,[99] writing, company naming,[100] illustration, etc.[101][102] While crowdsourcing competitions have been used for decades in some creative fields (such as architecture), creative crowdsourcing has proliferated with the recent development of web-based platforms where clients can solicit a wide variety of creative work at lower cost than by traditional means.


创意众包涵盖平面设计、建筑众包、产品设计、服装设计、电影、写作、公司命名、插画等创意项目的外包。虽然众包竞赛在某些创意领域(如建筑)已经使用了几十年,但随着近年来网络平台的发展,创意众包迅速扩散,客户可以以低于传统方式的成本征集各种各样的创意作品。

Crowdsourcing in software development

Crowdsourcing approach to software development is used by a number of companies. Notable examples are Topcoder and its parent company Wipro.[31][32][103]


软件开发的众包方法被许多公司采用。著名的例子是 Topcoder 和它的母公司 Wipro。

Crowdsourcing language-related data collection

Crowdsourcing has also been used for gathering language-related data. For dictionary work, as was mentioned above, it was applied over a hundred years ago by the Oxford English Dictionary editors, using paper and postage. Much later, a call for collecting examples of proverbs on a specific topic (religious pluralism) was printed in a journal.[104] Today, as "crowdsourcing" has the inherent connotation of being web-based, such language-related data gathering is being conducted on the web by crowdsourcing in accelerating ways. Currently, a number of dictionary compilation projects are being conducted on the web, particularly for languages that are not highly academically documented, such as for the Oromo language.[105] Software programs have been developed for crowdsourced dictionaries, such as WeSay.[106] A slightly different form of crowdsourcing for language data has been the online creation of scientific and mathematical terminology for American Sign Language.[107] Mining publicly available social media conversations can be used as a form of implicit crowdsourcing to approximate the geographic extent of speaker dialects.[66] Proverb collection is also being done via crowdsourcing on the Web, most innovatively for the Pashto language of Afghanistan and Pakistan.[108][109][110] Crowdsourcing has been extensively used to collect high-quality gold standard for creating automatic systems in natural language processing (e.g. named entity recognition, entity linking).[111]


众包还被用于收集与语言相关的数据。如上文所述,词典编纂方面的众包早在一百多年前就被《牛津英语词典》的编辑们采用,当时依靠的是纸张和邮寄。很久之后,有期刊刊登了征集特定主题(宗教多元化)谚语实例的启事。如今,由于"众包"天然带有基于网络的含义,这类与语言相关的数据收集正以加速的方式在网上通过众包进行。目前,一些词典编纂项目正在网上开展,特别是针对学术文献不足的语言,例如奥罗莫语。还出现了面向众包词典的软件程序,如 WeSay。语言数据众包的另一种略有不同的形式,是在线为美国手语创建科学和数学术语。挖掘公开的社交媒体对话,可以作为一种隐性众包来近似估计说话者方言的地理分布。谚语收集也在通过网络众包进行,其中最具创新性的是针对阿富汗和巴基斯坦的普什图语。众包还被广泛用于收集高质量的黄金标准数据,用于构建自然语言处理中的自动化系统(如命名实体识别、实体链接)。

Crowdsolving


众包解决

Crowdsolving is a collaborative, yet holistic, way of solving a problem using many people, communities, groups, or resources. It is a type of crowdsourcing with focus on complex and intellectually demanding problems requiring considerable effort, and quality/ uniqueness of contribution.[112]


众解是一种协作式而又整体性的解决问题的方式,利用许多人、社区、团体或资源来解决问题。它是一种聚焦于复杂且智力要求高的问题的众包,这类问题需要相当大的投入,并且对贡献的质量或独特性有要求。

Problem-Idea Chains are a form of idea crowdsourcing and crowdsolving in which individuals are asked to submit ideas to solve problems, and then to submit problems with those ideas. The aim is to encourage individuals to find practical, well-thought-through solutions to problems.[113]


问题-想法链是创意众包与众解的一种形式:个人先提交解决问题的想法,再针对这些想法提出问题。其目的是鼓励个人找到经过深思熟虑、切实可行的解决方案。

Crowdfunding

Crowdfunding is the process of funding projects by a multitude of people contributing a small amount to attain a certain monetary goal, typically via the Internet.[114] Crowdfunding has been used for both commercial and charitable purposes.[115] The crowdfunding model that has been around the longest is rewards-based crowdfunding, in which people can prepurchase products, buy experiences, or simply donate. While this funding may in some cases go towards helping a business, funders are not allowed to invest and become shareholders via rewards-based crowdfunding.[116]



众筹是由众多人各自贡献少量资金、以达到特定资金目标来为项目融资的过程,通常通过互联网进行。众筹已被用于商业和慈善目的。历史最悠久的众筹模式是基于回报的众筹:人们可以在这种模式下预购产品、购买体验,或单纯进行捐赠。尽管这些资金在某些情况下可以用于帮助企业,但出资者不能通过基于回报的众筹进行投资并成为股东。

Individuals, businesses, and entrepreneurs can showcase their businesses and projects to the entire world by creating a profile, which typically includes a short video introducing their project, a list of rewards per donation, and illustrations through images. The goal is to create a compelling message towards which readers will be drawn. Funders make monetary contribution for numerous reasons:


个人、企业和创业者可以通过创建一个主页向全世界展示自己的企业和项目,主页通常包括一段介绍项目的短视频、按捐赠档位列出的回报清单,以及配图说明。其目标是打造一个能够吸引读者的有说服力的信息。出资者提供资金捐助的原因多种多样:

  1. They connect to the greater purpose of the campaign, such as being a part of an entrepreneurial community and supporting an innovative idea or product.[117]
  2. They connect to a physical aspect of the campaign like rewards and gains from investment.[117]
  3. They connect to the creative display of the campaign's presentation.
  4. They want to see new products before the public.[117]
  1. 他们认同活动更大的目标,例如成为创业社区的一员、支持一个创新的想法或产品。
  2. 他们看重活动的实际回报,例如投资带来的奖励和收益。
  3. 他们被活动展示中的创意表达所吸引。
  4. 他们希望先于公众看到新产品。

The dilemma for equity crowdfunding in the US as of 2012 was how the Securities and Exchange Commission (SEC) was going to regulate the entire process. At the time, rules and regulations were being refined by the SEC, which had until 1 January 2013 to tweak the fundraising methods. The regulators were overwhelmed trying to regulate Dodd-Frank and all the other rules and regulations involving public companies and the way they trade. Advocates of regulation claimed that crowdfunding would open up the flood gates for fraud, called it the "wild west" of fundraising, and compared it to the 1980s days of penny-stock "cold-call cowboys". The process allows for up to $1 million to be raised without some of the regulations being involved. Companies under the then-current proposal would have exemptions available and be able to raise capital from a larger pool of persons, which can include lower thresholds for investor criteria, whereas the old rules required that the person be an "accredited" investor. These people are often recruited from social networks, where the funds can be acquired from an equity purchase, loan, donation, or ordering. The amounts collected have become quite high, with requests of over a million dollars for software such as Trampoline Systems, which used crowdfunding to finance the commercialization of its new software.


截至2012年,股权众筹在美国面临的困境是,美国证券交易委员会(SEC)将如何监管整个过程。当时,SEC 正在完善相关规则,并需在2013年1月1日之前调整筹资方式。监管机构忙于执行多德-弗兰克(Dodd-Frank)法案以及所有其他涉及上市公司及其交易方式的规则,已经不堪重负。支持监管的人士声称,众筹会为欺诈打开闸门,称其为筹资领域的"狂野西部",并将其与上世纪80年代廉价股"电话推销牛仔"的年代相提并论。这一流程允许在不适用部分监管规定的情况下筹集最多100万美元的资金。按照当时的提案,公司可以获得豁免,并能够从更大范围的人群中募集资金,投资者的门槛也可以更低,而旧规则要求出资人必须是"合格"投资者。这些出资人通常通过社交网络招募,资金可以来自股权认购、贷款、捐赠或预订。募集的金额已经相当可观,例如 Trampoline Systems 就提出了超过一百万美元的需求,用于资助其新软件的商业化。

Mobile crowdsourcing

Mobile crowdsourcing involves activities that take place on smartphones or mobile platforms and are frequently characterized by GPS technology.[118] This allows for real-time data gathering and gives projects greater reach and accessibility. However, mobile crowdsourcing can lead to an urban bias, as well as safety and privacy concerns.[119][120][121]


移动众包指在智能手机或移动平台上开展的活动,这类活动通常以 GPS 技术为特征。这使得数据可以实时收集,并让项目拥有更大的覆盖面和可及性。然而,移动众包也可能带来城市偏向,以及安全和隐私方面的担忧。

Macrowork

Macrowork tasks typically have these characteristics: they can be done independently, they take a fixed amount of time, and they require special skills. Macrotasks could be part of specialized projects or could be part of a large, visible project where workers pitch in wherever they have the required skills. The key distinguishing factors are that macrowork requires specialized skills and typically takes longer, while microwork requires no specialized skills.


宏工作任务通常具有以下特征:可以独立完成、耗时固定、需要特殊技能。宏任务可以属于专门项目,也可以属于某个大型的、可见的项目,工作者在自己具备所需技能的环节投入工作。两者的关键区别在于:宏工作需要专门技能且通常耗时更长,而微工作不需要专门技能。

Microwork

Microwork is a crowdsourcing platform where users do small tasks for which computers lack aptitude, in exchange for small amounts of money. Amazon's popular Mechanical Turk has created many different projects for users to participate in, where each task requires very little time and offers a very small payment.[5] The Chinese counterparts, commonly called Witkey, are similar and include sites such as Taskcn.com and k68.cn. When choosing tasks, since only certain users "win", users learn to submit later and to pick less popular tasks to increase the likelihood of having their work chosen.[122] An example of a Mechanical Turk project is when users searched satellite images for a boat to find the lost researcher Jim Gray.[87] Based on an elaborate survey of participants in a microtask crowdsourcing platform, Gadiraju et al. have proposed a taxonomy of the different types of microtasks that are crowdsourced.[123] Two important questions in microtask crowdsourcing are dynamic task allocation and answer aggregation.


微工作是一种众包平台,用户在上面完成计算机不擅长的小任务,换取少量报酬。亚马逊广受欢迎的 Mechanical Turk 创建了许多不同的项目供用户参与,每个任务耗时很短,报酬也很低。其中国版本通常被称为威客,类似的网站包括 Taskcn.com 和 k68.cn。由于只有部分用户能"获胜",用户在选择任务时会学着晚些提交,并挑选不太热门的任务,以提高自己的作品被选中的概率。Mechanical Turk 项目的一个例子是,用户在卫星图像中搜寻一艘船,以寻找失踪的研究者 Jim Gray。Gadiraju 等人基于对某微任务众包平台参与者的详细调查,提出了对不同类型众包微任务的分类。微任务众包中的两个重要问题是动态任务分配和答案聚合。
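Of the two questions named above, answer aggregation has a simple and widely used baseline: collect each microtask's answer from several workers and take a majority vote. The sketch below uses hypothetical image-labeling data; real platforms typically layer worker-quality weighting or dynamic task allocation on top of this.

<syntaxhighlight lang="python">
from collections import Counter

def aggregate_by_majority(labels_per_task):
    """Aggregate redundant microtask answers by simple majority vote.

    labels_per_task -- dict mapping a task id to the list of labels submitted
                       by different workers for that task.
    Returns a dict mapping each task id to (winning_label, agreement_ratio).
    """
    aggregated = {}
    for task_id, labels in labels_per_task.items():
        label, votes = Counter(labels).most_common(1)[0]
        aggregated[task_id] = (label, votes / len(labels))
    return aggregated


# Hypothetical example: three workers label each image as "cat" or "dog".
answers = {
    "img_001": ["cat", "cat", "dog"],
    "img_002": ["dog", "dog", "dog"],
}
print(aggregate_by_majority(answers))
# A low agreement ratio could trigger dynamic task allocation,
# i.e. routing the task to additional workers before accepting the label.
</syntaxhighlight>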

Simple projects

Simple projects are those that require more time and skill than micro- and macrowork. While an example of macrowork would be writing survey feedback, simple projects include activities like writing a basic line of code or programming a database, both of which require a larger time commitment and a higher skill level. These projects are usually not found on sites like Amazon Mechanical Turk and are instead posted on platforms like Upwork that call for specific expertise.[124]


与微工作和宏工作相比,简单项目需要更多的时间和技能。宏工作的一个例子是撰写调查反馈,而简单项目则包括编写一行基础代码或编写数据库程序等活动,这两者都需要更多的时间投入和更高的技能水平。这类项目通常不会出现在 Amazon Mechanical Turk 这样的网站上,而是发布在 Upwork 等需要特定专业知识的平台上。

Complex projects

Complex projects generally take the most time, have higher stakes, and call for people with very specific skills. These are generally "one-off" projects that are difficult to accomplish and can include projects like designing a new product that a company hopes to patent. Tasks like that would be "complex" because design is a meticulous process that requires a large amount of time to perfect, and also people doing these projects must have specialized training in design to effectively complete the project. These projects usually pay the highest, yet are rarely offered.[125]


复杂的项目通常花费最多的时间,有更高的风险,并且需要具备非常特殊技能的人。这些通常是“一次性”的项目,很难完成,可以包括一些项目,比如设计一个公司希望申请专利的新产品。像这样的任务是“复杂的”,因为设计是一个细致的过程,需要大量的时间来完善,而且做这些项目的人必须有专门的设计培训,以有效地完成项目。这些项目通常报酬最高,但很少提供。

Inducement prize contests

Web-based idea competitions or inducement prize contests often consist of generic ideas, cash prizes, and an Internet-based platform to facilitate easy idea generation and discussion. One example of these competitions is IBM's 2006 "Innovation Jam", attended by over 140,000 international participants and yielding around 46,000 ideas.[126][127] Another example is the Netflix Prize in 2009. The idea was to ask the crowd to come up with a recommendation algorithm more accurate than Netflix's own algorithm. It had a grand prize of US$1,000,000, which was awarded to BellKor's Pragmatic Chaos team, whose algorithm bested Netflix's own at predicting ratings by 10.06%.


基于网络的创意竞赛或悬赏竞赛通常包括通用性的创意征集、现金奖励,以及一个便于创意产生和讨论的互联网平台。例如 IBM 2006年的"Innovation Jam"活动,共有超过14万名国际参与者参加,产生了约4.6万个创意。另一个例子是2009年的 Netflix 大奖,其目标是让大众提出一个比 Netflix 自家算法更准确的推荐算法。大奖金额为100万美元,最终授予了 BellKor's Pragmatic Chaos 团队,该团队的评分预测算法比 Netflix 自己的算法好10.06%。

Another example of competition-based crowdsourcing is the 2009 DARPA balloon experiment, where DARPA placed 10 balloon markers across the United States and challenged teams to compete to be the first to report the location of all the balloons. A collaboration of efforts was required to complete the challenge quickly and in addition to the competitive motivation of the contest as a whole, the winning team (MIT, in less than nine hours) established its own "collaborapetitive" environment to generate participation in their team.[128] A similar challenge was the Tag Challenge, funded by the US State Department, which required locating and photographing individuals in five cities in the US and Europe within 12 hours based only on a single photograph. The winning team managed to locate three suspects by mobilizing volunteers worldwide using a similar incentive scheme to the one used in the balloon challenge.[129]


基于竞赛的众包的另一个例子是2009年 DARPA 的气球实验:DARPA 在美国各地放置了10个气球标记,让各团队竞争,看谁最先报告所有气球的位置。要快速完成挑战需要多方协作,因此除了整个竞赛的竞争动机之外,获胜团队(麻省理工学院,用时不到9小时)还建立了自己的"合作竞争式"(collaborapetitive)环境来吸引人们加入团队。一个类似的挑战是由美国国务院资助的 Tag Challenge,要求仅凭一张照片,在12小时内在美国和欧洲的5个城市中找到并拍摄到指定的人。获胜团队采用与气球挑战类似的激励机制,动员了世界各地的志愿者,成功找到了三名嫌疑人。

Open innovation platforms are a very effective way of crowdsourcing people's thoughts and ideas to do research and development. The company InnoCentive is a crowdsourcing platform for corporate research and development where difficult scientific problems are posted for crowds of solvers to discover the answer and win a cash prize, which can range from $10,000 to $100,000 per challenge.[18] InnoCentive, of Waltham, Massachusetts and London, England provides access to millions of scientific and technical experts from around the world. The company claims a success rate of 50% in providing successful solutions to previously unsolved scientific and technical problems. IdeaConnection.com challenges people to come up with new inventions and innovations and Ninesigma.com connects clients with experts in various fields. The X Prize Foundation creates and runs incentive competitions offering between $1 million and $30 million for solving challenges. Local Motors is another example of crowdsourcing. A community of 20,000 automotive engineers, designers, and enthusiasts competes to build off-road rally trucks.[130]


开放式创新平台是众包大众的想法和创意用于研发的一种非常有效的方式。InnoCentive 公司是一个面向企业研发的众包平台,在这个平台上发布困难的科学问题,供大量求解者寻找答案并赢取现金奖励,每项挑战的奖金从1万美元到10万美元不等。InnoCentive 位于马萨诸塞州沃尔瑟姆和英国伦敦,可接触到来自世界各地的数百万科学技术专家。该公司声称,其为此前未解决的科学技术问题提供成功解决方案的成功率达到50%。IdeaConnection.com 鼓励人们提出新的发明和创新,Ninesigma.com 则把客户与各领域的专家联系起来。X Prize 基金会创建并运营悬赏竞赛,为解决挑战提供100万至3000万美元的奖金。Local Motors 是众包的另一个例子,由2万名汽车工程师、设计师和爱好者组成的社区竞相打造越野拉力卡车。

Implicit crowdsourcing

Implicit crowdsourcing is less obvious because users do not necessarily know they are contributing, yet can still be very effective in completing certain tasks. Rather than users actively participating in solving a problem or providing information, implicit crowdsourcing involves users doing another task entirely where a third party gains information for another topic based on the user's actions.[18]


隐式众包不那么明显,因为用户不一定知道他们在贡献,但仍然可以非常有效地完成某些任务。隐式众包不是用户积极参与解决问题或提供信息,而是让用户完全从事另一项任务,即第三方根据用户的行为获取关于另一主题的信息。

A good example of implicit crowdsourcing is the ESP game, where users guess what images are and then these labels are used to tag Google images. Another popular use of implicit crowdsourcing is through reCAPTCHA, which asks people to solve CAPTCHAs to prove they are human, and then provides CAPTCHAs from old books that cannot be deciphered by computers, to digitize them for the web. Like many tasks solved using the Mechanical Turk, CAPTCHAs are simple for humans, but often very difficult for computers.[87]


隐式众包的一个很好的例子是 ESP 游戏:用户猜测图片内容,这些标签随后被用来标注 Google 图片。另一个流行的隐式众包应用是 reCAPTCHA,它要求人们识别验证码来证明自己是人类,同时把来自旧书、计算机无法识别的文字作为验证码,借此将这些书籍数字化上网。和许多通过 Mechanical Turk 解决的任务一样,验证码对人类来说很简单,但对计算机来说往往非常困难。

Piggyback crowdsourcing can be seen most frequently by websites such as Google that data-mine a user's search history and websites to discover keywords for ads, spelling corrections, and finding synonyms. In this way, users are unintentionally helping to modify existing systems, such as Google's AdWords.[131]



搭载式众包最常见于 Google 这类网站:它们挖掘用户的搜索历史和所访问的网站,以发现用于广告的关键词、进行拼写纠正和寻找同义词。通过这种方式,用户在无意中帮助改进了现有系统,比如 Google 的 AdWords。

Health-care crowdsourcing

Research has emerged that outlines the use of crowdsourcing techniques in the public health domain.[132][133][134] The collective intelligence outcomes from crowdsourcing are being generated in three broad categories of public health care; health promotion,[133] health research,[135] and health maintenance.[136] Crowdsourcing also enables researchers to move from small homogeneous groups of participants to large heterogenous groups,[137] beyond convenience samples such as students or higher educated people. The SESH group focuses on using crowdsourcing to improve health.


已有研究概述了众包技术在公共卫生领域的应用。众包产生的集体智慧成果出现在公共卫生保健的三大类别中:健康促进、健康研究和健康维护。众包还使研究人员能够从同质的小规模参与者群体扩展到大规模的异质群体,超越学生或高学历人群等便利样本。SESH 小组专注于利用众包来改善健康。

Crowdsourcing in agriculture

Crowdsourced research also reaches into the field of agriculture, mainly to help farmers and experts identify different types of weeds[138] in the fields and to find the best ways to remove them.


众包研究也涉及到农业领域。这主要是为了帮助农民和专家鉴定田间不同类型的杂草,也为他们提供除草的最佳方法。

Crowdsourcing in cheating in bridge

Boye Brogeland initiated a crowdsourcing investigation of cheating by top-level bridge players that showed several players were guilty, which led to their suspension.[139]



博伊·布罗格兰(Boye Brogeland)发起了一项针对顶级桥牌选手作弊行为的众包调查,结果显示多名选手确有作弊,并因此被停赛。

Crowdshipping

Crowdshipping (crowd-shipping) is a peer-to-peer shipping service, usually conducted via an online platform or marketplace.[140] There are several methods that have been categorized as crowd-shipping:

  • Travelers heading in the direction of the buyer who are willing to carry the package as part of their luggage for a reward.[141]
  • Truck drivers whose route passes the buyer's location and who are willing to take extra items in their truck.[142]
  • Community-based platforms that connect international buyers and local forwarders by allowing buyers to use a forwarder's address as the purchase destination, after which the forwarder ships the items on to the buyer.[143]



众包运输(crowdshipping)是一种点对点的运输服务,通常通过在线平台或市场进行。有几种方式被归类为众包运输:

  • 顺路前往买家所在方向的旅行者,愿意把包裹作为行李的一部分捎带过去,以换取报酬。
  • 路线经过买家所在地的卡车司机,愿意在卡车上捎带额外物品。
  • 连接国际买家与本地转运商的社区平台:买家把转运商的地址作为购买目的地,转运商再把商品转寄给买家。

Crowdsourcers

A number of motivations exist for businesses to use crowdsourcing to accomplish their tasks, find solutions for problems, or to gather information. These include the ability to offload peak demand, access cheap labor and information, generate better results, access a wider array of talent than might be present in one organization, and undertake problems that would have been too difficult to solve internally.[144] Crowdsourcing allows businesses to submit problems on which contributors can work, on topics such as science, manufacturing, biotech, and medicine, with monetary rewards for successful solutions. Although crowdsourcing complicated tasks can be difficult, simple work tasks can be crowdsourced cheaply and effectively.[145]


企业使用众包来完成任务、寻找问题解决方案或收集信息的动机有很多,包括:分流高峰需求、获得廉价的劳动力和信息、取得更好的结果、接触到比单一组织内部更广泛的人才,以及处理内部难以解决的问题。众包让企业可以就科学、制造、生物技术和医药等主题发布问题供贡献者解决,成功的解决方案可获得金钱奖励。虽然复杂任务的众包可能比较困难,但简单的工作任务可以廉价而有效地众包出去。

Crowdsourcing also has the potential to be a problem-solving mechanism for government and nonprofit use.[146] Urban and transit planning are prime areas for crowdsourcing. One project to test crowdsourcing's public participation process for transit planning in Salt Lake City was carried out from 2008 to 2009, funded by a U.S. Federal Transit Administration grant.[147] Another notable application of crowdsourcing to government problem solving is the Peer to Patent Community Patent Review project for the U.S. Patent and Trademark Office.[148]


众包还有潜力成为政府和非营利组织解决问题的机制。城市规划和公共交通规划是众包的重要应用领域。2008年至2009年,在美国联邦公共交通管理局拨款资助下,盐湖城开展了一个项目,测试众包在公交规划公众参与过程中的作用。众包在政府解决问题方面的另一个著名应用,是美国专利商标局的 Peer to Patent 社区专利审查项目。

Researchers have used crowdsourcing systems like the Mechanical Turk to aid their research projects by crowdsourcing some aspects of the research process, such as data collection, parsing, and evaluation. Notable examples include using the crowd to create speech and language databases,[149][150] and using the crowd to conduct user studies.[131] Crowdsourcing systems provide these researchers with the ability to gather large amounts of data. Additionally, using crowdsourcing, researchers can collect data from populations and demographics they may not have had access to locally, but that improve the validity and value of their work.[151]


研究人员使用 Mechanical Turk 等众包系统来辅助研究项目,把研究过程的某些环节(如数据收集、解析和评估)众包出去。著名的例子包括利用大众创建语音和语言数据库,以及利用大众开展用户研究。众包系统让这些研究人员能够收集大量数据。此外,借助众包,研究人员可以从本地难以接触到的人群和人口群体中收集数据,从而提高其工作的效度和价值。

Artists have also used crowdsourcing systems. In his project called the Sheep Market, Aaron Koblin used Mechanical Turk to collect 10,000 drawings of sheep from contributors around the world.[152] Sam Brown (artist) leverages the crowd by asking visitors of his website explodingdog to send him sentences that he uses as inspirations for paintings.[153] Art curator Andrea Grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized.[85] As with other crowdsourcers, artists use crowdsourcing systems to generate and collect data. The crowd also can be used to provide inspiration and to collect financial support for an artist's work.[154]


艺术家们也使用众包系统。在名为"绵羊市场"(the Sheep Market)的项目中,Aaron Koblin 利用 Mechanical Turk 向世界各地的贡献者征集了1万幅绵羊图画。艺术家山姆·布朗(Sam Brown)则请他的网站 explodingdog 的访问者给他发送句子,作为绘画的灵感来源。艺术策展人安德里亚·格罗弗(Andrea Grover)认为,个人在众包项目中往往更加开放,因为他们没有受到面对面的评判或审视。与其他众包发起者一样,艺术家利用众包系统来生成和收集数据;大众还可以为艺术家的作品提供灵感和资金支持。

Additionally, crowdsourcing from 100 million drivers is being used by INRIX to collect users' driving times to provide better GPS routing and real-time traffic updates.[155]


此外,INRIX 利用来自1亿名司机的众包数据收集用户的行车时间,以提供更好的 GPS 路线规划和实时路况更新。

Demographics

The crowd is an umbrella term for the people who contribute to crowdsourcing efforts. Though it is sometimes difficult to gather data about the demographics of the crowd, a study by Ross et al. surveyed the demographics of a sample of the more than 400,000 registered crowdworkers using Amazon Mechanical Turk to complete tasks for pay. A previous study in 2008 by Ipeirotis found that users at that time were primarily American, young, female, and well-educated, with 40% earning more than $40,000 per year. In November 2009, Ross found a very different Mechanical Turk population, 36% of which was Indian. Two-thirds of Indian workers were male, and 66% had at least a bachelor's degree. Two-thirds had annual incomes less than $10,000, with 27% sometimes or always depending on income from Mechanical Turk to make ends meet.[156]


"大众"(crowd)是对参与众包工作的人群的统称。尽管有时很难收集这一群体的人口统计数据,Ross 等人还是对使用 Amazon Mechanical Turk 完成有偿任务的40多万名注册众包工作者中的一个样本做了人口统计调查。Ipeirotis 在2008年的一项早期研究发现,当时的用户主要是美国人、年轻、女性且受过良好教育,其中40%的人年收入超过4万美元。2009年11月,Ross 发现 Mechanical Turk 的用户构成已大不相同,其中36%是印度人;印度工作者中三分之二是男性,66%至少拥有学士学位;三分之二的人年收入低于1万美元,27%的人有时或总是依靠 Mechanical Turk 的收入维持生计。

The average US user of Mechanical Turk earned $2.30 per hour for tasks in 2009, versus $1.58 for the average Indian worker.[citation needed] While the majority of users worked less than five hours per week, 18% worked 15 hours per week or more. This is less than minimum wage in the United States (but not in India), which Ross suggests raises ethical questions for researchers who use crowdsourcing.


2009年,Mechanical Turk 的美国用户平均每小时靠任务赚取2.30美元,印度用户平均为1.58美元。大多数用户每周工作不到5小时,但有18%的用户每周工作15小时以上。这一水平低于美国的最低工资标准(但在印度并非如此),罗斯认为这给使用众包的研究人员提出了伦理问题。

The demographics of Microworkers.com differ from Mechanical Turk in that the US and India together account for only 25% of workers; 197 countries are represented among users, with Indonesia (18%) and Bangladesh (17%) contributing the largest share. However, 28% of employers are from the US.[157]


Microworkers.com 的用户构成与 Mechanical Turk 不同:美国和印度加起来只占工作者的25%;用户来自197个国家,其中印度尼西亚(18%)和孟加拉国(17%)占比最大。不过,28%的雇主来自美国。

Another study of the demographics of the crowd at iStockphoto found a crowd that was largely white, middle- to upper-class, higher educated, worked in a so-called "white-collar job" and had a high-speed Internet connection at home.[158] In a crowd-sourcing diary study of 30 days in Europe the participants were predominantly higher educated women.[137]


另一项针对 iStockphoto 人群的人口统计学研究发现,这些人大部分是白人,中上层阶级,受过高等教育,从事所谓的“白领工作”,在家里可以高速上网。在欧洲进行的一项为期30天的众包日记研究中,参与者主要是受过高等教育的女性。

Studies have also found that crowds are not simply collections of amateurs or hobbyists. Rather, crowds are often professionally trained in a discipline relevant to a given crowdsourcing task and sometimes hold advanced degrees and many years of experience in the profession.[158][159][160][161] Claiming that crowds are amateurs, rather than professionals, is both factually untrue and may lead to marginalization of crowd labor rights.[162]


研究还发现,人群并不仅仅是业余人士或爱好者的集合。相反,人群往往受过与特定众包任务相关的专业训练,有时拥有高级学位和多年的专业经验。声称群众是业余人士而非专业人士,既不符合事实,也可能导致众包劳动者的权利被边缘化。

G. D. Saxton et al. (2013) studied the role of community users, among other elements, during his content analysis of 103 crowdsourcing organizations. Saxton et al. developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) in which to categorize the roles of community users, such as researcher, engineer, programmer, journalist, graphic designer, etc., and the products and services developed.[163]

G. D. Saxton et al. (2013) studied the role of community users, among other elements, during his content analysis of 103 crowdsourcing organizations. Saxton et al. developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) in which to categorize the roles of community users, such as researcher, engineer, programmer, journalist, graphic designer, etc., and the products and services developed.

萨克斯顿等人(2013)在对103个众包组织的内容分析中研究了社区用户的角色及其他要素。萨克斯顿等人提出了九种众包模式的分类(中介模式、公民媒体生产、协作软件开发、数字产品销售、产品设计、点对点社会融资、消费者报告模式、知识库建设模式和协同科学项目模式),用以对社区用户的角色(如研究员、工程师、程序员、记者、平面设计师等)以及所开发的产品和服务进行归类。

Motivations


Contributors

Many scholars of crowdsourcing suggest that both intrinsic and extrinsic motivations cause people to contribute to crowdsourced tasks and these factors influence different types of contributors.[164][78][158][159][161][165][166][167] For example, students and people employed full-time rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.[165]

Many scholars of crowdsourcing suggest that both intrinsic and extrinsic motivations cause people to contribute to crowdsourced tasks and these factors influence different types of contributors. For example, students and people employed full-time rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.

许多研究众包的学者认为,内在动机和外在动机都会促使人们对众包任务做出贡献,而这些因素会影响不同类型的贡献者。例如,与兼职工作者相比,学生和全职就业者认为人力资本提升没有那么重要,而女性比男性更看重社会交往。

Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to motivations related to the fun and enjoyment that contributors experience through their participation. These motivations include: skill variety, task identity, task autonomy, direct feedback from the job, and pastime. Community-based motivations refer to motivations related to community participation, and include community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by a possibility to make social impact, contribute to social change and help their peers.[164]

Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to motivations related to the fun and enjoyment that contributors experience through their participation. These motivations include: skill variety, task identity, task autonomy, direct feedback from the job, and pastime. Community-based motivations refer to motivations related to community participation, and include community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by a possibility to make social impact, contribute to social change and help their peers.

内在动机分为两类:基于享受的动机和基于社区的动机。基于享受的动机是指与贡献者在参与过程中体验到的乐趣和享受有关的动机,包括:技能多样性、任务认同、任务自主性、来自工作的直接反馈和消遣。基于社区的动机是指与社区参与相关的动机,包括社区认同和社会交往。在众包新闻中,动机因素是内在的:群体受到产生社会影响、促进社会变革以及帮助同伴这些可能性的驱动。

Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the immediately received compensations given to those who complete tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially,[168] such as the altruistic motivations of online volunteers. Chandler and Kapelner found that US users of the Amazon Mechanical Turk were more likely to complete a task when told they were going to "help researchers identify tumor cells," than when they were not told the purpose of their task. However, of those who completed the task, quality of output did not depend on the framing of the task.[169]

Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the immediately received compensations given to those who complete tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially, such as the altruistic motivations of online volunteers. Chandler and Kapelner found that US users of the Amazon Mechanical Turk were more likely to complete a task when told they were going to "help researchers identify tumor cells," than when they were not told the purpose of their task. However, of those who completed the task, quality of output did not depend on the framing of the task.

外在动机分为三类:即时回报、延迟回报和社会动机。即时回报是指通过货币支付,让完成任务的人立即获得报酬。延迟回报是指可用于产生未来优势的好处,例如培训技能和被潜在雇主注意到。社会动机是亲社会行为的回报,例如在线志愿者的利他动机。钱德勒和卡普尔纳发现,当亚马逊土耳其机器人的美国用户被告知他们将“帮助研究人员识别肿瘤细胞”时,他们完成任务的可能性比未被告知任务目的时更大。然而,在完成任务的人当中,产出质量并不取决于任务的表述框架。

Motivation factors in crowdsourcing are often a mix of intrinsic and extrinsic factors.[170] In a crowdsourced law-making project, the crowd was motivated by a mix of intrinsic and extrinsic factors. Intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, to deliberate with and learn from peers. Extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether to pursue one's own interest or more altruistic goals, such as protecting nature.[171]

Motivation factors in crowdsourcing are often a mix of intrinsic and extrinsic factors. In a crowdsourced law-making project, the crowd was motivated by a mix of intrinsic and extrinsic factors. Intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, to deliberate with and learn from peers. Extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether to pursue one's own interest or more altruistic goals, such as protecting nature.

众包中的动机因素通常是内在因素和外在因素的混合。在一个众包的法律制定项目中,人群受到内在因素和外在因素的混合激励。内在动机包括履行公民义务、出于社会公益的原因影响法律,以及与他人商讨并相互学习。外在动机包括为了经济利益或其他利益而修改法律。参与众包政策制定是一种基层倡导行为,既可能是为了追求个人利益,也可能是为了更具利他性的目标,例如保护自然。

Another form of social motivation is prestige or status. The International Children's Digital Library recruits volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. The Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control. Crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, providing motivation to produce high-quality work.[172]

Another form of social motivation is prestige or status. The International Children's Digital Library recruits volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. The Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control. Crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, providing motivation to produce high-quality work.

另一种形式的社会动机是声望或地位。国际儿童数字图书馆招募志愿者翻译和审校书籍。由于所有译者的贡献都会得到公开认可,考夫曼和舒尔茨将此视为一种基于声誉的策略,用来激励那些希望与有声望的机构建立联系的个人。Mechanical Turk 则在另一种意义上把声誉用作激励因素,即作为一种质量控制手段。经常以被判定为不合格的方式完成任务的众包工作者,可能会被禁止承接未来的任务,这为产出高质量的工作提供了动力。

Requesters

Using crowdsourcing through means such as Amazon Mechanical Turk can help provide researchers and requesters with an already established infrastructure for their projects, allowing them to easily use a crowd and access participants from a diverse culture background. Using crowdsourcing can also help complete the work for projects that would normally have geographical and population size limitations.[173]

Using crowdsourcing through means such as Amazon Mechanical Turk can help provide researchers and requesters with an already established infrastructure for their projects, allowing them to easily use a crowd and access participants from a diverse culture background. Using crowdsourcing can also help complete the work for projects that would normally have geographical and population size limitations.

通过诸如亚马逊土耳其机器人这样的方式使用众包,可以为研究人员和请求者的项目提供现成的基础设施,使他们能够轻松地使用人群,并接触来自不同文化背景的参与者。使用众包还有助于完成通常会受到地理和人口规模限制的项目工作。

Participation in crowdsourcing

Despite the potential global reach of IT applications online, recent research illustrates that differences in location affect participation outcomes in IT-mediated crowds.[174]

Despite the potential global reach of IT applications online, recent research illustrates that differences in location affect participation outcomes in IT-mediated crowds.

尽管在线信息技术应用具有潜在的全球影响力,但最近的研究表明,地理位置的差异会影响以信息技术为媒介的人群的参与结果。

Limitations and controversies

At least six major topics cover the limitations and controversies about crowdsourcing:

  1. Impact of crowdsourcing on product quality
  2. Entrepreneurs contribute less capital themselves
  3. Increased number of funded ideas
  4. The value and impact of the work received from the crowd
  5. The ethical implications of low wages paid to crowdworkers
  6. Trustworthiness and informed decision making

At least six major topics cover the limitations and controversies about crowdsourcing:

  1. Impact of crowdsourcing on product quality
  2. Entrepreneurs contribute less capital themselves
  3. Increased number of funded ideas
  4. The value and impact of the work received from the crowd
  5. The ethical implications of low wages paid to crowdworkers
  6. Trustworthiness and informed decision making

至少有六个主要话题涵盖了众包的局限性和争议性:

  1. 众包对产品质量的影响
  2. 企业家自己贡献更少的资本
  3. 获得资助的想法数量增加
  4. 从人群获得的工作的价值和影响
  5. 向众包工人支付低工资的道德影响
  6. 可信度与知情决策

Impact of crowdsourcing on product quality

Crowdsourcing allows anyone to participate, allowing for many unqualified participants and resulting in large quantities of unusable contributions. Companies, or additional crowdworkers, then have to sort through all of these low-quality contributions. The task of sorting through crowdworkers' contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead.[175] For example, susceptibility to faulty results is caused by targeted, malicious work efforts. Since crowdworkers completing microtasks are paid per task, often a financial incentive causes workers to complete tasks quickly rather than well. Verifying responses is time-consuming, so requesters often depend on having multiple workers complete the same task to correct errors. However, having each task completed multiple times increases time and monetary costs.[176]

Crowdsourcing allows anyone to participate, allowing for many unqualified participants and resulting in large quantities of unusable contributions. Companies, or additional crowdworkers, then have to sort through all of these low-quality contributions. The task of sorting through crowdworkers' contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead. For example, susceptibility to faulty results is caused by targeted, malicious work efforts. Since crowdworkers completing microtasks are paid per task, often a financial incentive causes workers to complete tasks quickly rather than well. Verifying responses is time-consuming, so requesters often depend on having multiple workers complete the same task to correct errors. However, having each task completed multiple times increases time and monetary costs.

众包允许任何人参与,因此会有许多不合格的参与者,产生大量无法使用的贡献。随后,公司或额外的众包工作者必须对所有这些低质量的贡献进行筛选。筛选众包工作者贡献的任务,加上管理人群的必要工作,要求公司雇佣实际员工,从而增加了管理开销。此外,有针对性的恶意作业也容易导致错误结果。由于完成微任务的众包工作者按任务计酬,经济激励往往促使工作者快速完成任务,而不是出色地完成任务。验证答复非常耗时,因此请求者通常依赖让多名工作者完成同一任务来纠正错误。然而,让每项任务被多次完成又会增加时间和金钱成本。
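The redundancy strategy described above (assigning the same microtask to several workers and keeping the consensus answer) can be illustrated with a minimal Python sketch. This is only an illustration of the general idea, not code from any crowdsourcing platform; the function and data names are invented for the example.

  # Illustrative sketch of redundancy-based quality control: each microtask is
  # answered by several crowdworkers and the requester keeps the most common
  # answer, at the cost of paying for every task several times.
  from collections import Counter

  def aggregate_by_majority(responses):
      """responses: dict mapping task_id -> list of worker answers."""
      results = {}
      for task_id, answers in responses.items():
          top_answer, votes = Counter(answers).most_common(1)[0]
          results[task_id] = {
              "answer": top_answer,
              "agreement": votes / len(answers),  # fraction of workers who agree
          }
      return results

  # Hypothetical example: three workers label each image.
  labels = {
      "img-001": ["cat", "cat", "dog"],
      "img-002": ["dog", "dog", "dog"],
  }
  print(aggregate_by_majority(labels))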

Crowdsourcing quality is also impacted by task design. Lukyanenko et al.[177] argue that, the prevailing practice of modeling crowdsourcing data collection tasks in terms of fixed classes (options), unnecessarily restricts quality. Results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level (which is typically less useful to sponsor organizations, hence less common). Further, greater overall accuracy is expected when participants could provide free-form data compared to tasks in which they select from constrained choices.

Crowdsourcing quality is also impacted by task design. Lukyanenko et al. argue that, the prevailing practice of modeling crowdsourcing data collection tasks in terms of fixed classes (options), unnecessarily restricts quality. Results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level (which is typically less useful to sponsor organizations, hence less common). Further, greater overall accuracy is expected when participants could provide free-form data compared to tasks in which they select from constrained choices.

众包的质量也受到任务设计的影响。卢基亚年科等人提出,按照固定类别(选项)对众包数据收集任务进行建模的流行做法,不必要地限制了质量。结果表明,信息的准确性取决于用于建模领域的类别:参与者在更一般的层面上对现象进行分类时能提供更准确的信息(但这通常对赞助组织不太有用,因此也不太常见)。此外,与从受限选项中选择的任务相比,当参与者能够提供自由格式的数据时,预期总体准确性会更高。

Just as limiting, oftentimes the scenario is that just not enough skills or expertise exist in the crowd to successfully accomplish the desired task. While this scenario does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. A comparison between the evaluation of business models from experts and an anonymous online crowd showed that an anonymous online crowd cannot evaluate business models to the same level as experts.[178] In these cases, it may be difficult or even impossible to find the qualified people in the crowd, as their voices may be drowned out by consistent, but incorrect crowd members.[179] However, if the difficulty of the task is even "intermediate" in its difficulty, estimating crowdworkers' skills and intentions and leveraging them for inferring true responses works well,[180] albeit with an additional computation cost.

Just as limiting, oftentimes the scenario is that just not enough skills or expertise exist in the crowd to successfully accomplish the desired task. While this scenario does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. A comparison between the evaluation of business models from experts and an anonymous online crowd showed that an anonymous online crowd cannot evaluate business models to the same level as experts. In these cases, it may be difficult or even impossible to find the qualified people in the crowd, as their voices may be drowned out by consistent, but incorrect crowd members. However, if the difficulty of the task is even "intermediate" in its difficulty, estimating crowdworkers' skills and intentions and leveraging them for inferring true responses works well, albeit with an additional computation cost.

同样有局限性的是,通常情况下,人群中没有足够的技能或专业知识来成功地完成预期的任务。虽然这种情况不会影响图像标记等“简单”任务,但对于更复杂的任务(如工程设计或产品验证)而言,这种情况尤其成问题。专家对商业模式的评估和匿名在线群体的评估之间的比较表明,匿名在线群体无法将商业模式评估到与专家相同的水平。在这种情况下,很难甚至不可能在人群中找到合格的人,因为他们的声音可能会被一致但不正确的人群淹没。然而,如果这项任务的难度只是“中等”,那么估计群体工作者的技能和意图并利用它们来推断真实的反应就能很好地工作,尽管需要额外的计算成本。
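The approach mentioned above of estimating workers' skills and intentions and using them to infer true responses can be sketched, in a deliberately simplified form, as an iterative weighted vote: answers are inferred from accuracy-weighted votes, and worker accuracies are then re-estimated from agreement with those answers. This toy Python loop is my own illustration of the general idea, not the algorithm used in the cited work, and the names are invented.

  # Toy sketch of skill-aware aggregation: alternate between (a) inferring a
  # consensus answer per task from accuracy-weighted votes and (b) re-estimating
  # each worker's accuracy as their agreement with the current consensus.
  from collections import defaultdict

  def weighted_consensus(votes, n_rounds=5):
      """votes: list of (worker_id, task_id, answer) triples."""
      workers = {w for w, _, _ in votes}
      accuracy = {w: 0.5 for w in workers}  # start with uninformative weights
      consensus = {}
      for _ in range(n_rounds):
          # (a) accuracy-weighted vote per task
          scores = defaultdict(lambda: defaultdict(float))
          for w, t, a in votes:
              scores[t][a] += accuracy[w]
          consensus = {t: max(answers, key=answers.get) for t, answers in scores.items()}
          # (b) re-estimate each worker's accuracy against the consensus
          agree, total = defaultdict(int), defaultdict(int)
          for w, t, a in votes:
              total[w] += 1
              agree[w] += int(a == consensus[t])
          accuracy = {w: agree[w] / total[w] for w in workers}
      return consensus, accuracy

This comes at the extra computation cost the paragraph mentions: every re-estimation pass touches all collected votes again.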

Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to quickly and cheaply conduct studies with larger sample sizes than would be otherwise achievable. However, due to limited access to the Internet, participation in low developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low amount of pay is not a strong motivation for most users in these countries. These factors lead to a bias in the population pool towards users in medium developed countries, as deemed by the human development index.[181]

Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to quickly and cheaply conduct studies with larger sample sizes than would be otherwise achievable. However, due to limited access to the Internet, participation in low developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low amount of pay is not a strong motivation for most users in these countries. These factors lead to a bias in the population pool towards users in medium developed countries, as deemed by the human development index.

众包工作者是总体人口中的一个非随机样本。许多研究人员利用众包来快速而廉价地进行样本量比其他方式更大的研究。然而,由于互联网接入有限,欠发达国家的参与率相对较低。高度发达国家的参与率同样很低,这主要是因为微薄的报酬对这些国家的大多数用户而言缺乏吸引力。这些因素导致参与者群体偏向于按人类发展指数衡量属于中等发达国家的用户。

The likelihood that a crowdsourced project will fail due to lack of monetary motivation or too few participants increases over the course of the project. Crowdsourcing markets are not a first-in, first-out queue. Tasks that are not completed quickly may be forgotten, buried by filters and search procedures so that workers do not see them. This results in a long-tail power law distribution of completion times.[182] Additionally, low-paying research studies online have higher rates of attrition, with participants not completing the study once started.[151] Even when tasks are completed, crowdsourcing does not always produce quality results. When Facebook began its localization program in 2008, it encountered some criticism for the low quality of its crowdsourced translations.[183]

The likelihood that a crowdsourced project will fail due to lack of monetary motivation or too few participants increases over the course of the project. Crowdsourcing markets are not a first-in, first-out queue. Tasks that are not completed quickly may be forgotten, buried by filters and search procedures so that workers do not see them. This results in a long-tail power law distribution of completion times. Additionally, low-paying research studies online have higher rates of attrition, with participants not completing the study once started. Even when tasks are completed, crowdsourcing does not always produce quality results. When Facebook began its localization program in 2008, it encountered some criticism for the low quality of its crowdsourced translations.

一个众包项目由于缺乏金钱激励或参与者太少而失败的可能性,会随着项目的进行而增加。众包市场并不是先进先出的队列。没有被迅速完成的任务可能会被遗忘,被过滤器和搜索程序掩埋,以致工作者看不到它们。这导致完成时间呈长尾幂律分布。此外,报酬较低的在线研究有更高的流失率,参与者开始后往往不会完成研究。即使任务完成了,众包也并不总能产生高质量的结果。当 Facebook 在2008年启动其本地化项目时,它因众包翻译质量低而受到一些批评。
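To make the long-tail claim concrete (my own gloss, not a result from the cited study): a power-law distribution of completion times means that the share of tasks still unfinished after time t decays roughly as P(T > t) ∝ t^(−α) for some exponent α > 0, so even when the typical task is picked up quickly, a small fraction of tasks can remain open for a very long time.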

One of the problems of crowdsourcing products is the lack of interaction between the crowd and the client. Usually little information is known about the final desired product, and often very limited interaction with the final client occurs. This can decrease the quality of product because client interaction is a vital part of the design process.[184]

One of the problems of crowdsourcing products is the lack of interaction between the crowd and the client. Usually little information is known about the final desired product, and often very limited interaction with the final client occurs. This can decrease the quality of product because client interaction is a vital part of the design process.

众包产品的一个问题是缺乏人群与客户之间的互动。通常对最终期望的产品知之甚少,与最终客户的互动往往也非常有限。这可能会降低产品质量,因为客户互动是设计过程中至关重要的一部分。

An additional cause of the decrease in product quality that can result from crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other's knowledge and ideas. Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. However, in crowdsourcing, crowdworkers are left to depend on their own knowledge and means to complete tasks.[175]

An additional cause of the decrease in product quality that can result from crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other's knowledge and ideas. Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. However, in crowdsourcing, crowdworkers are left to depend on their own knowledge and means to complete tasks.

众包导致产品质量下降的另一个原因是缺乏协作工具。在典型的工作场所中,同事们的组织方式使他们能够相互协作,并在彼此的知识和想法的基础上继续发展。此外,公司通常会为员工提供履行职责所需的信息、流程和工具。然而,在众包中,众包工作者只能依靠自己的知识和手段来完成任务。

A crowdsourced project is usually expected to be unbiased by incorporating a large population of participants with a diverse background. However, most of the crowdsourcing works are done by people who are paid or directly benefit from the outcome (e.g. most of open source projects working on Linux). In many other cases, the end product is the outcome of a single person's endeavour, who creates the majority of the product, while the crowd only participates in minor details.[185]

A crowdsourced project is usually expected to be unbiased by incorporating a large population of participants with a diverse background. However, most of the crowdsourcing works are done by people who are paid or directly benefit from the outcome (e.g. most of open source projects working on Linux). In many other cases, the end product is the outcome of a single person's endeavour, who creates the majority of the product, while the crowd only participates in minor details.

人们通常期望众包项目通过吸收大量背景多样的参与者来避免偏见。然而,大多数众包工作是由获得报酬或直接从结果中受益的人完成的(例如围绕 Linux 开展的大多数开源项目)。在许多其他情况下,最终产品是单个人努力的结果,这个人创造了产品的绝大部分,而群体只参与次要细节。

Entrepreneurs contribute less capital themselves

To make an idea turn into a reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of necessary capital can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months depending on different variables, including the entrepreneur's network and the amount of initial self-generated capital.

To make an idea turn into a reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of necessary capital can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months depending on different variables, including the entrepreneur's network and the amount of initial self-generated capital.

要使一个想法变成现实,首先需要的是资本。根据众包项目的范围和复杂程度,必要的资金数额可以从几千美元到几十万美元,甚至更多。融资过程可能需要几天到几个月,这取决于不同的变量,包括企业家的网络和初始自产资本的数量。

The crowdsourcing process allows entrepreneurs access to a wide range of investors who can take different stakes in the project.[186] In effect, crowdsourcing simplifies the capital-raising process and allows entrepreneurs to spend more time on the project itself and reaching milestones rather than dedicating time to get it started. Overall, the simplified access to capital can save time to start projects and potentially increase efficiency of projects.

The crowdsourcing process allows entrepreneurs access to a wide range of investors who can take different stakes in the project. In effect, crowdsourcing simplifies the capital-raising process and allows entrepreneurs to spend more time on the project itself and reaching milestones rather than dedicating time to get it started. Overall, the simplified access to capital can save time to start projects and potentially increase efficiency of projects.

众包过程允许企业家接触到范围广泛的投资者,这些投资者可以在项目中持有不同的股份。实际上,众包简化了筹资过程,使企业家能够花更多时间在项目本身上,达到里程碑,而不是花时间启动项目。总体而言,简化资金获取途径可以节省项目启动时间,并可能提高项目效率。

Opponents of this issue argue easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking because they can take on an investment size with which they are comfortable.[186] This leads to entrepreneurs losing possible experience convincing investors who are wary of potential risks in investing because they do not depend on one single investor for the survival of their project. Instead of being forced to assess risks and convince large institutional investors why their project can be successful, wary investors can be replaced by others who are willing to take on the risk.

Opponents of this issue argue easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking because they can take on an investment size with which they are comfortable. This leads to entrepreneurs losing possible experience convincing investors who are wary of potential risks in investing because they do not depend on one single investor for the survival of their project. Instead of being forced to assess risks and convince large institutional investors why their project can be successful, wary investors can be replaced by others who are willing to take on the risk.

这个问题的反对者认为,通过大量小投资者更容易获得资金,可能会损害项目及其创建者的利益。由于融资过程简化,涉及的投资者数量更多、持股比例更小,投资者更愿意冒险,因为他们可以选择自己感到满意的投资规模。这使得企业家失去了说服对投资风险持谨慎态度的投资者的历练机会,因为他们的项目不再依赖单一投资者才能存活。企业家不必被迫评估风险并说服大型机构投资者相信其项目能够成功,而是可以用其他愿意承担风险的投资者来取代谨慎的投资者。

There are translation companies and several users of translations who pretend to use crowdsourcing as a means for drastically cutting costs, instead of hiring professional translators. This situation has been systematically denounced by IAPTI and other translator organizations.[187]

There are translation companies and several users of translations who pretend to use crowdsourcing as a means for drastically cutting costs, instead of hiring professional translators. This situation has been systematically denounced by IAPTI and other translator organizations.

有一些翻译公司和一些翻译需求方假借众包之名大幅削减成本,而不是雇佣专业翻译。这种做法一直受到 IAPTI 和其他翻译组织的系统性谴责。

Increased number of funded ideas

The raw number of ideas that get funded and the quality of the ideas is a large controversy over the issue of crowdsourcing.

The raw number of ideas that get funded and the quality of the ideas is a large controversy over the issue of crowdsourcing.

获得资助的创意的数量及其质量,是围绕众包问题的一大争议。

Proponents argue that crowdsourcing is beneficial because it allows niche ideas that would not survive venture capitalist or angel funding, many times the primary investors in startups, to be started. Many ideas are killed in their infancy due to insufficient support and lack of capital, but crowdsourcing allows these ideas to be started if an entrepreneur can find a community to take interest in the project.[188]

Proponents argue that crowdsourcing is beneficial because it allows niche ideas that would not survive venture capitalist or angel funding, many times the primary investors in startups, to be started. Many ideas are killed in their infancy due to insufficient support and lack of capital, but crowdsourcing allows these ideas to be started if an entrepreneur can find a community to take interest in the project.

支持者认为众包是有益的,因为它让那些无法通过风险投资或天使投资(很多时候是初创公司的主要投资来源)存活下来的小众创意得以启动。由于支持不足和缺乏资金,许多创意在萌芽阶段就夭折了,但只要企业家能找到一个对项目感兴趣的社区,众包就可以让这些创意启动起来。

Crowdsourcing allows those who would benefit from the project to fund and become a part of it, which is one way for small niche ideas get started.[189] However, when the raw number of projects grows, the number of possible failures can also increase. Crowdsourcing assists niche and high-risk projects to start because of a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower return, and lower levels of success.[190]

Crowdsourcing allows those who would benefit from the project to fund and become a part of it, which is one way for small niche ideas get started. However, when the raw number of projects grows, the number of possible failures can also increase. Crowdsourcing assists niche and high-risk projects to start because of a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower return, and lower levels of success.

众包允许那些能从项目中获益的人出资并成为项目的一部分,这是小众创意得以起步的一种方式。然而,当项目总数增加时,可能失败的数量也会增加。众包之所以能帮助小众和高风险项目启动,是因为少数寻求该产品的人表现出了需求。由于风险高且目标市场小,众包项目整体上面临更大的资本损失可能、更低的回报和更低的成功率。

Concerns

Because crowdworkers are considered independent contractors rather than employees, they are not guaranteed minimum wage. In practice, workers using the Amazon Mechanical Turk generally earn less than the minimum wage. In 2009, it was reported that United States Turk users earned an average of $2.30 per hour for tasks, while users in India earned an average of $1.58 per hour, which is below minimum wage in the United States (but not in India).[156][191] Some researchers who have considered using Mechanical Turk to get participants for research studies, have argued that the wage conditions might be unethical.[151][192] However, according to other research, workers on Amazon Mechanical Turk do not feel they are exploited, and are ready to participate in crowdsourcing activities in the future.[193] When Facebook began its localization program in 2008, it received criticism for using free labor in crowdsourcing the translation of site guidelines.[183]

Because crowdworkers are considered independent contractors rather than employees, they are not guaranteed minimum wage. In practice, workers using the Amazon Mechanical Turk generally earn less than the minimum wage. In 2009, it was reported that United States Turk users earned an average of $2.30 per hour for tasks, while users in India earned an average of $1.58 per hour, which is below minimum wage in the United States (but not in India). Some researchers who have considered using Mechanical Turk to get participants for research studies have argued that the wage conditions might be unethical. However, according to other research, workers on Amazon Mechanical Turk do not feel they are exploited, and are ready to participate in crowdsourcing activities in the future. When Facebook began its localization program in 2008, it received criticism for using free labor in crowdsourcing the translation of site guidelines.

由于众包工作者被视为独立承包商而非雇员,他们没有最低工资保障。实际上,使用亚马逊土耳其机器人的工作者的收入通常低于最低工资标准。据报道,2009年美国用户完成任务的平均时薪为2.30美元,而印度用户平均为1.58美元,低于美国的最低工资标准(但不低于印度的)。一些考虑使用土耳其机器人招募研究参与者的研究人员认为,这样的工资条件可能是不道德的。然而,另一些研究表明,亚马逊土耳其机器人上的工作者并不觉得自己受到了剥削,并愿意在未来继续参与众包活动。当 Facebook 在2008年启动其本地化项目时,它因利用免费劳动力众包翻译网站指南而受到批评。

Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers. For users of the Amazon Mechanical Turk, this means that requestors decide whether users' work is acceptable, and reserve the right to withhold pay if it does not meet their standards.[173] Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.[194][162][195]

Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers. For users of the Amazon Mechanical Turk, this means that requestors decide whether users' work is acceptable, and reserve the right to withhold pay if it does not meet their standards. Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.

通常情况下,不会与众包工作者签订书面合同、保密协议或雇佣协议。对于亚马逊土耳其机器人的用户来说,这意味着由请求者决定用户的工作是否可以接受,并保留在不符合其标准时拒绝付款的权利。批评人士指出,众包安排剥削了人群中的个人,已有人呼吁众包工作者组织起来争取劳工权利。

Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowd sourcing. Crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents report working in a team on their submission.[159] Amazon Mechanical Turk workers collaborated with academics to create a platform, WeAreDynamo.org, that allows them to organize and create campaigns to better their work situation.[196]

Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowd sourcing. Crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents report working in a team on their submission. Amazon Mechanical Turk workers collaborated with academics to create a platform, WeAreDynamo.org, that allows them to organize and create campaigns to better their work situation.

群体成员之间的协作也可能很困难,甚至不被鼓励,尤其是在竞争性众包的背景下。众包网站 InnoCentive 允许组织征集科学和技术问题的解决方案;只有10.6% 的受访者表示他们是以团队形式完成提交的。亚马逊土耳其机器人的工作者与学者合作创建了 WeAreDynamo.org 平台,使他们能够组织并发起活动来改善自己的工作状况。

See also




References

  1. 1.0 1.1 1.2 Safire, William (5 February 2009). "On Language". New York Times Magazine. Retrieved 19 May 2013. Comes now Jeff Howe, contributing editor for Wired magazine, who recalls pitching an article idea in 2005 to Mark Robinson, his editor there, about how the Internet was helping businesses use amateurs to replace professionals. He reports that Robinson said, 'Hmmm … it's like they're outsourcing to the crowd.'
    'Or,' Howe informs me, 'I said, crowdsourcing. Frankly, I was joking. Silicon Valley’s affection for portmanteaus is a bit of an inside joke at Wired. But Mark liked my story idea, and liked the word even more.'
  2. 2.0 2.1 Schenk, Eric; Guittard, Claude (1 January 2009). "Crowdsourcing What can be Outsourced to the Crowd and Why". Retrieved 1 October 2018.
  3. 3.0 3.1 Hirth, Matthias; Hoßfeld, Tobias; Tran-Gia, Phuoc (2011). "Anatomy of a Crowdsourcing Platform - Using the Example of Microworkers.com". 2011 Fifth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing. pp. 322–329. doi:10.1109/IMIS.2011.89. ISBN 978-1-61284-733-7. http://i3wue.de/staff/matthias.hirth/author_version/papers/conf_410_author_version.pdf. 
  4. 4.0 4.1 4.2 Estellés-Arolas, Enrique; González-Ladrón-de-Guevara, Fernando (2012), "Towards an Integrated Crowdsourcing Definition" (PDF), Journal of Information Science, 38 (2): 189–200, doi:10.1177/0165551512437638, hdl:10251/56904, S2CID 18535678
  5. 5.0 5.1 5.2 5.3 Howe, Jeff (2006). "The Rise of Crowdsourcing". Wired.
  6. 6.0 6.1 Brabham, D. C. (2013). Crowdsourcing. Cambridge, Massachusetts; London, England: The MIT Press.
  7. 7.0 7.1 Brabham, D. C. (2008). "Crowdsourcing as a Model for Problem Solving an Introduction and Cases". Convergence: The International Journal of Research into New Media Technologies. 14 (1): 75–90. CiteSeerX 10.1.1.175.1623. doi:10.1177/1354856507084420. S2CID 145310730.
  8. 8.0 8.1 Prpić, J., & Shukla, P. (2016). Crowd Science: Measurements, Models, and Methods. In Proceedings of the 49th Annual Hawaii International Conference on System Sciences, Kauai, Hawaii: IEEE Computer Society
  9. Buettner, Ricardo (2015). A Systematic Literature Review of Crowdsourcing Research from a Human Resource Management Perspective. 48th Annual Hawaii International Conference on System Sciences. Kauai, Hawaii: IEEE. pp. 4609–4618. doi:10.13140/2.1.2061.1845. ISBN 978-1-4799-7367-5.
  10. 10.0 10.1 Prpić, John; Taeihagh, Araz; Melton, James (September 2015). "The Fundamentals of Policy Crowdsourcing". Policy & Internet. 7 (3): 340–361. arXiv:1802.04143. doi:10.1002/poi3.102. S2CID 3626608.
  11. 11.0 11.1 11.2 Liu, Wei; Moultrie, James; Ye, Songhe (4 May 2019). "The Customer-Dominated Innovation Process: Involving Customers as Designers and Decision-Makers in Developing New Product". The Design Journal. 22 (3): 299–324. doi:10.1080/14606925.2019.1592324. ISSN 1460-6925. S2CID 145931864.
  12. 12.0 12.1 Schlagwein, Daniel; Bjørn-Andersen, Niels (2014), "Organizational Learning with Crowdsourcing: The Revelatory Case of LEGO" (PDF), Journal of the Association for Information Systems, 15 (11): 754–778, doi:10.17705/1jais.00380
  13. Taeihagh, Araz (19 June 2017). "Crowdsourcing, Sharing Economies, and Development". Journal of Developing Societies (in English). 33 (2): 0169796X1771007. arXiv:1707.06603. doi:10.1177/0169796x17710072. S2CID 32008949.
  14. 14.0 14.1 Guth, Kristen L.; Brabham, Daren C. (4 August 2017). "Finding the diamond in the rough: Exploring communication and platform in crowdsourcing performance". Communication Monographs. 84 (4): 510–533. doi:10.1080/03637751.2017.1359748. ISSN 0363-7751. S2CID 54045924.
  15. 15.0 15.1 Diamond, Larry; Whittington, Zak (2009). "Social Media". In Welzel, Christian; Haerpfer, Christian W.; Bernhagen, Patrick et al.. Democratization (2 ed.). Oxford: Oxford University Press. 2018. p. 256. ISBN 9780198732280. https://books.google.com/books?id=0IN8DwAAQBAJ. "Another way that social media can contribute to democratization is by 'crowdsourcing' information. This elicits the knowledge and wisdom of the 'crowd' [...]." 
  16. Howe, Jeff (2 June 2006). "Crowdsourcing: A Definition". Crowdsourcing Blog. Retrieved 2 January 2013.
  17. "Daren C. Brabham". USC Annenberg. University of Southern California. Retrieved 17 September 2014.
  18. 18.0 18.1 18.2 18.3 18.4 Brabham, Daren (2008), "Crowdsourcing as a Model for Problem Solving: An Introduction and Cases" (PDF), Convergence: The International Journal of Research into New Media Technologies, 14 (1): 75–90, CiteSeerX 10.1.1.175.1623, doi:10.1177/1354856507084420, S2CID 145310730, archived from the original (PDF) on 25 April 2012
  19. 19.0 19.1 Afuah, A.; Tucci, C. L. (2012). "Crowdsourcing as a Solution to Distant Search". Academy of Management Review. 37 (3): 355–375. doi:10.5465/amr.2010.0146.
  20. Vuković, M. (2009). Crowdsourcing for enterprises. In Services-I, 2009 World Conference on (pp. 686-692). IEEE.
  21. de Vreede, T., Nguyen, C., de Vreede, G. J., Boughzala, I., Oh, O., & Reiter-Palmon, R. (2013). A Theoretical Model of User Engagement in Crowdsourcing. In Collaboration and Technology (pp. 94-109). Springer Berlin Heidelberg
  22. Claypole, Maurice (14 February 2012). "Learning through crowdsourcing is deaf to the language challenge". The Guardian. London.
  23. 23.0 23.1 23.2 23.3 23.4 23.5 23.6 23.7 "A Brief History of Crowdsourcing [Infographic]". Crowdsourcing.org. 18 March 2012. Archived from the original on 3 July 2015. Retrieved 2 July 2015.
  24. Hern, Chester G.(2002). Tracks in the Sea, p. 123 & 246. McGraw Hill. .
  25. "Smithsonian Crowdsourcing Since 1849". Smithsonian Institution Archives. 14 April 2011. Retrieved 24 August 2018.
  26. Clark, Catherine E. (25 April 1970). "'C'était Paris en 1970'". Études Photographiques (31). Retrieved 2 July 2015.
  27. Axelrod R. (1980), "'Effective choice in the Prisoner's Dilemma'", Journal of Conflict Resolution, 24 (1): 3−25, doi:10.1177/002200278002400101, S2CID 143112198
  28. "UNV Online Volunteering Service | History". Onlinevolunteering.org. Archived from the original on 2 July 2015. Retrieved 2 July 2015.
  29. 29.0 29.1 "Wired 14.06: The Rise of Crowdsourcing". Archive.wired.com. 4 January 2009. Retrieved 2 July 2015.
  30. Lih, Andrew (2009). The Wikipedia revolution: how a bunch of nobodies created the world's greatest encyclopedia (1st ed.). New York: Hyperion. ISBN 978-1401303716. https://archive.org/details/wikipediarevolut00liha. 
  31. 31.0 31.1 Lakhani KR, Garvin DA, Lonstein E (January 2010). "TopCoder (A): Developing Software through Crowdsourcing". Harvard Business School Case: 610–032.
  32. 32.0 32.1 Phadnisi, Shilpa (21 October 2016). "Appirio's TopCoder too is a big catch for Wipro". The Times of India. Retrieved 30 April 2018.
  33. 33.0 33.1 "For The Love Of Open Mapping Data". TechCrunch (in English). Retrieved 23 July 2019.
  34. 34.0 34.1 [1] 互联网档案馆存档,存档日期29 November 2014.
  35. Garrigos-Simon, Fernando J.; Gil-Pechuán, Ignacio; Estelles-Miguel, Sofia (2015) (in en). Advances in Crowdsourcing. Springer. ISBN 9783319183411. https://books.google.com/books?id=WrclCQAAQBAJ&q=pepsico+2012+do+me+a+flavor+crowdsourcing&pg=PA154. 
  36. "Are you a BOLD innovator?". Innovators magazine (in British English). 15 November 2019. Retrieved 12 July 2021.{{cite web}}: CS1 maint: url-status (link)
  37. "Antoine-Jean-Baptiste-Robert Auget, Baron de Montyon". New Advent. Retrieved 25 February 2012.
  38. "It Was All About Alkali". Chemistry Chronicles. Retrieved 25 February 2012.
  39. "Nicolas Appert". John Blamire. Retrieved 25 February 2012.
  40. "9 Examples of Crowdsourcing, Before 'Crowdsourcing' Existed". MemeBurn. 15 September 2011. Retrieved 25 February 2012.
  41. Pande, Shamni. "The People Know Best". Business Today. India: Living Media India Limited.
  42. Vergano, Dan (30 August 2014). "1833 Meteor Storm Started Citizen Science". National Geographic. StarStruck. Retrieved 18 September 2014.
  43. "Gateway to Astronaut Photography of Earth". NASA.
  44. McLaughlin, Elliot. "Image Overload: Help us sort it all out, NASA requests". Cnn.com. CNN. Retrieved 18 September 2014.
  45. Després, Jacques; Hadjsaid, Nouredine; Criqui, Patrick; Noirot, Isabelle (1 February 2015). "Modelling the impacts of variable renewable sources on the power sector: reconsidering the typology of energy modelling tools". Energy. 80: 486–495. doi:10.1016/j.energy.2014.12.005. ISSN 0360-5442.
  46. "OpenEI — Energy Information, Data, and other Resources". OpenEI. Retrieved 26 September 2016.
  47. Garvin, Peggy (12 December 2009). "New Gateway: Open Energy Info". SLA Government Information Division. Dayton, OH, USA. Retrieved 26 September 2016.
  48. Brodt-Giles, Debbie (2012). WREF 2012: OpenEI — an open energy data and information exchange for international audiences. Golden, CO, USA: National Renewable Energy Laboratory (NREL). https://ases.conference-services.net/resources/252/2859/pdf/SOLAR2012_0677_full%20paper.pdf. 
  49. Davis, Chris; Chmieliauskas, Alfredas; Dijkema, Gerard; Nikolic, Igor. "Enipedia". Delft, The Netherlands: Energy and Industry group, Faculty of Technology, Policy and Management, TU Delft. Archived from the original on 10 June 2014. Retrieved 7 October 2016.
  50. Davis, Chris (2012). Making sense of open data: from raw data to actionable insight — PhD thesis. Delft, The Netherlands: Delft University of Technology. https://www.researchgate.net/publication/255823727. Chapter 9 discusses in depth the initial development of Enipedia.
  51. "What Is the Four-Generation Program?". The Church of Jesus Christ of Latter-day Saints. Retrieved 30 January 2012.
  52. Parker, Christopher J.; May, Andrew; Mitchell, Val (November 2013). "The role of VGI and PGI in supporting outdoor activities". Applied Ergonomics. 44 (6): 886–894. doi:10.1016/j.apergo.2012.04.013. ISSN 0003-6870. PMID 22795180.
  53. Parker, Christopher J.; May, Andrew; Mitchell, Val (15 May 2014). "User-centred design of neogeography: the impact of volunteered geographic information on users' perceptions of online map 'mashups'". Ergonomics (in English). 57 (7): 987–997. doi:10.1080/00140139.2014.909950. ISSN 0014-0139. PMID 24827070. S2CID 13458260.
  54. Brown, Michael; Sharples, Sarah; Harding, Jenny; Parker, Christopher J. (November 2013). "Usability of Geographic Information: Current challenges and future directions" (PDF). Applied Ergonomics. 44 (6): 855–865. doi:10.1016/j.apergo.2012.10.013. PMID 23177775.
  55. Parker, Christopher J.; May, Andrew; Mitchell, Val (August 2012). "Understanding Design with VGI using an Information Relevance Framework". Transactions in GIS (in English). 16 (4): 545–560. doi:10.1111/j.1467-9671.2012.01302.x. ISSN 1361-1682.
  56. Bonney, R. and LaBranche, M. (2004). Citizen Science: Involving the Public in Research. ASTC Dimensions. May/June 2004, p. 13.
  57. Baretto, C.; Fastovsky, D.; Sheehan, P. (2003). "A Model for Integrating the Public into Scientific Research". Journal of Geoscience Education. 50 (1): 71–75. Bibcode:2003JGeEd..51...71B. doi:10.5408/1089-9995-51.1.71. S2CID 67761505.
  58. McCaffrey, R.E. (2005). "Using Citizen Science in Urban Bird Studies" (PDF). Urban Habitats. 3 (1): 70–86.
  59. King, Turi E.; Jobling, Mark A. (2009). "What's in a name? Y chromosomes, surnames and the genetic genealogy revolution". Trends in Genetics. 25 (8): 351–60. doi:10.1016/j.tig.2009.06.003. hdl:2381/8106. PMID 19665817. The International Society of Genetic Genealogy (www.isogg.org) advocates the use of genetics as a tool for genealogical research, and provides a support network for genetic genealogists. It hosts the ISOGG Y-haplogroup tree, which has the virtue of being regularly updated.
  60. Mendex, etc. al., Fernando (28 February 2013). "An African American Paternal Lineage Adds an Extremely Ancient Root to the Human Y Chromosome Phylogenetic Tree". The American Journal of Human Genetics. 92 (3): 454–459. doi:10.1016/j.ajhg.2013.02.002. PMC 3591855. PMID 23453668.
  61. Wells, Spencer (2013). "The Genographic Project and the Rise of Citizen Science". Southern California Genealogical Society (SCGS). Archived from the original on 10 July 2013. Retrieved 10 July 2013.
  62. Aitamurto, Tanja (2015). "Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change and Peer-Learning". International Journal of Communication. 9: 3523–3543.
  63. Aitamurto, Tanja (2016). "Crowdsourcing as a Knowledge-Search Method in Digital Journalism: Ruptured Ideals and Blended Responsibility". Digital Journalism. 4 (2): 280–297. doi:10.1080/21670811.2015.1034807. S2CID 156243124.
  64. Aitamurto, Tanja (2013). "Balancing between open and closed: co-creation in magazine journalism". Digital Journalism. 1 (2): 229–251. doi:10.1080/21670811.2012.750150. S2CID 62882093.
  65. Keuleers; et al. (February 2015). "Word knowledge in the crowd: Measuring vocabulary size and word prevalence in a massive online experiment". Quarterly Journal of Experimental Psychology. 68 (8): 1665–1692. doi:10.1080/17470218.2015.1022560. PMID 25715025. S2CID 4894686.
  66. 66.0 66.1 "The Extension of Positive 'anymore'". Google Docs (in English). Retrieved 27 September 2020.
  67. "History of the Christmas Bird Count | Audubon". Birds.audubon.org. 22 January 2015. Retrieved 2 July 2015.
  68. "Thank you!". Audubon. 5 October 2017. Archived from the original on 24 August 2014.
  69. Smith, Graham; Richards, Robert C.; Gastil, John (12 May 2015). "The Potential ofParticipediaas a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations" (PDF). Policy & Internet (in English). 7 (2): 243–262. doi:10.1002/poi3.93. ISSN 1944-2866.
  70. Moon, M. Jae (2018). "Evolution of co-production in the information age: crowdsourcing as a model of web-based co-production in Korea". Policy and Society (in English). 37 (3): 294–309. doi:10.1080/14494035.2017.1376475. ISSN 1449-4035. S2CID 158440300.
  71. Taeihagh, Araz (8 November 2017). "Crowdsourcing: a new tool for policy-making?". Policy Sciences (in English). 50 (4): 629–647. arXiv:1802.03113. doi:10.1007/s11077-017-9303-3. ISSN 0032-2687. S2CID 27696037.
  72. Aitamurto, Tanja (2012). Crowdsourcing for Democracy: New Era In Policy–Making.. Committee for the Future, Parliament of Finland. pp. 10–30. ISBN 978-951-53-3459-6. 
  73. Prpić, J.; Taeihagh, A.; Melton, J. (2014). "Crowdsourcing the Policy Cycle. Collective Intelligence 2014, MIT Center for Collective Intelligence" (PDF). Humancomputation.com. Archived from the original (PDF) on 24 June 2015. Retrieved 2 July 2015.
  74. Prpić, J.; Taeihagh, A.; Melton, J. (2014). "A Framework for Policy Crowdsourcing. Oxford Internet Institute, University of Oxford - IPP 2014 - Crowdsourcing for Politics and Policy" (PDF). Ipp.oxii.ox.ac.uk. Retrieved 2 October 2018.
  75. Prpić, J.; Taeihagh, A.; Melton, J. (2014). "Experiments on Crowdsourcing Policy Assessment. Oxford Internet Institute, University of Oxford - IPP 2014 - Crowdsourcing for Politics and Policy" (PDF). Ipp.oii.ox.ac.uk. Retrieved 2 July 2015.
  76. Thapa, B.; Niehaves, B.; Seidel, C.; Plattfaut, R. (2015). "Citizen involvement in public sector innovation: Government and citizen perspectives". Information Polity. 20 (1): 3–17. doi:10.3233/IP-150351.
  77. Aitamurto and Landemore (4 February 2015). "Five design principles for crowdsourced policymaking: Assessing the case of crowdsourced off-road traffic law reform in Finland". Journal of Social Media for Organizations (1): 1–19.
  78. 78.0 78.1 Aitamurto, Landemore, Galli (2016). "Unmasking the Crowd: Participants' Motivation Factors, Profile and Expectations for Participation in Crowdsourced Policymaking". Information, Communication & Society. 20 (8): 1239–1260. doi:10.1080/1369118x.2016.1228993. S2CID 151989757 – via Routledge.
  79. Aitamurto, Chen, Cherif, Galli and Santana (2016). "Making Sense of Crowdsourced Civic Data with Big Data Tools". ACM Digital Archive: Academic Mindtrek. doi:10.1145/2994310.2994366. S2CID 16855773 – via ACM Digital Archive.
  80. Aitamurto, Tanja (2015-01-31). Crowdsourcing for Democracy: New Era in Policymaking. Committee for the Future, Parliament of Finland. ISBN 978-951-53-3459-6. http://thefinnishexperiment.com/2015/01/31/crowdsourcing-for-democracy-new-era-in-policy-making/. 
  81. "Home - ISCRAM2015 - University of Agder" (PDF). iscram2015.uia.no. Archived from the original (PDF) on 17 October 2016. Retrieved 14 October 2016.
  82. Holley, Rose (March 2010). "Crowdsourcing: How and Why Should Libraries Do It?". D-Lib Magazine. 16 (3/4). doi:10.1045/march2010-holley. Retrieved 21 May 2021.
  83. Trant, Jennifer (2009). Tagging, Folksonomy and Art Museums: Results of steve.museum's research. Archives & Museum Informatics. http://conference.archimuse.com/files/trantSteveResearchReport2008.pdf. 
  84. 84.0 84.1 Andro, M. (2018). Digital libraries and crowdsourcing, Wiley / ISTE. .
  85. 85.0 85.1 DeVun, Leah (19 November 2009). "Looking at how crowds produce and present art". Wired News. Archived from the original on 24 October 2012. Retrieved 26 February 2012.
  86. Ess, Henk van "Crowdsourcing: how to find a crowd", ARD ZDF Akademie 2010, Berlin, p. 99,
  87. 87.0 87.1 87.2 Doan, A.; Ramarkrishnan, R.; Halevy, A. (2011), "Crowdsourcing Systems on the World Wide Web" (PDF), Communications of the ACM, 54 (4): 86–96, doi:10.1145/1924421.1924442, S2CID 207184672
  88. Brabham, Daren C. (2013), Crowdsourcing, MIT Press, p. 45
  89. "Crowdvoting: How Elo Limits Disruption". thevisionlab.com. 25 May 2017.
  90. Blohm, Ivo (2018). "How to Manage Crowdsourcing Platforms Effectively" (PDF). BerkeleyHaas. 60 (2): 122–149. doi:10.1177/0008125617738255. S2CID 73551209.
  91. Howe, Jeff (2008), "Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business" (PDF), The International Achievement Institute., archived from the original (PDF) on 23 September 2015, retrieved 9 April 2012
  92. Robson, John (24 February 2012). "IEM Demonstrates the Political Wisdom of Crowds". Canoe.ca. Archived from the original on 7 April 2012. Retrieved 31 March 2012.
  93. "4 Great Examples of Crowdsourcing through Social Media". digitalagencymarketing.com. 2012. Archived from the original on 1 April 2012. Retrieved 29 March 2012.
  94. Goldberg, Ken; Newsom, Gavin (12 June 2014). "Let's amplify California's collective intelligence". Citris-uc.org. Retrieved 14 June 2014.
  95. Escoffier, N. and B. McKelvey (2014). "Using "Crowd-Wisdom Strategy" to Co-Create Market Value: Proof-of-Concept from the Movie Industry." in International Perspective on Business Innovation and Disruption in the Creative Industries: Film, Video, Photography, P. Wikstrom and R. DeFillippi, eds., UK: Edward Elgar Publishing Ltd, Chap. 11.
  96. Block, A. B. (21 April 2010). "How boxoffice trading could flop". The Hollywood Reporter.
  97. Chen, A. and R. Panaligan (2013). "Quantifying movie magic with Google search." Google White Paper, Industry Perspectives+User Insights
  98. Williams, Jack (17 February 2017). "An Indoor Football Team Has Its Fans Call the Plays". The New York Times (in English). ISSN 0362-4331. Retrieved 7 February 2018.
  99. Cunard, C. (2010). "The Movie Research Experience gets audiences involved in filmmaking." The Daily Bruin, (19 July)
  100. MacArthur, Kate. "Squadhelp wants your company to crowdsource better names (and avoid Boaty McBoatface)". chicagotribune.com (in English). Retrieved 28 August 2017.
  101. "Compete To Create Your Dream Home". FastCoexist.com. 4 June 2013. Retrieved 3 February 2014.
  102. "Designers, clients forge ties on web". Boston Herald. 11 June 2012. Retrieved 3 February 2014.
  103. Raviart, Dominique (4 October 2017). "Wipro Converging its Crowdtesting Activities Around QaaS & Topcoder". NelsonHall. Retrieved 30 April 2018.
  104. Stan Nussbaum. 2003. Proverbial perspectives on pluralism. Connections: the journal of the WEA Missions Committee October, pp. 30, 31.
  105. "Oromo dictionary project". OromoDictionary.com. Retrieved 3 February 2014.
  106. "Description of WeSay software and process" (PDF). Retrieved 3 February 2014.
  107. "Developing ASL vocabulary for science and math". Washington.edu. 7 December 2012. Retrieved 3 February 2014.
  108. "Pashto Proverb Collection project". AfghanProverbs.com. Archived from the original on 4 February 2014. Retrieved 3 February 2014.
  109. "Comparing methods of collecting proverbs" (PDF). gial.edu.
  110. Edward Zellem. 2014. Mataluna: 151 Afghan Pashto Proverbs. Tampa, FL: Culture Direct.
  111. Zhai, Haijun; Lingren, Todd; Deleger, Louise; Li, Qi; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre (2013). "Web 2.0-based crowdsourcing for high-quality gold standard development in clinical Natural Language Processing". Journal of Medical Internet Research. 15 (4): e73. doi:10.2196/jmir.2426. PMC 3636329. PMID 23548263.
  112. Geiger D, Rosemann M, Fielt E. Crowdsourcing information systems: a systems theory perspective. InProceedings of the 22nd Australasian Conference on Information Systems (ACIS 2011) 2011.
  113. D, Powell (2015). "A new tool for crowdsourcing". МИР (Модернизация. Инновации. Развитие). 6 (2-2 (22)). ISSN 2079-4665.
  114. Prive, Tanya. "What Is Crowdfunding And How Does It Benefit The Economy". Forbes.com. Retrieved 2 July 2015.
  115. Choy, Katherine; Schlagwein, Daniel (2016), "Crowdsourcing for a better world: On the relation between IT affordances and donor motivations in charitable crowdfunding", Information Technology & People, 29 (1): 221–247, doi:10.1108/ITP-09-2014-0215
  116. Barnett, Chance. "Crowdfunding Sites In 2014". Forbes.com. Retrieved 2 July 2015.
  117. Agrawal, Ajay, Christian Catalini, and Avi Goldfarb. "Some Simple Economics of Crowdfunding." National Bureau of Economic Research (2014): 63-97.
  118. "Mobile Crowdsourcing". Clickworker. Retrieved 10 December 2014.
  119. Thebault-Spieker; Terveen; Hecht. "Avoiding the South Side and the Suburbs: The Geography of Mobile Crowdsourcing Markets".
  120. Chatzimiloudis; Konstantinidis; Laoudias; Zeinalipour-Yazti. "Crowdsourcing with smartphones" (PDF).
  121. Arkian, Hamid Reza; Diyanat, Abolfazl; Pourkhalili, Atefe (2017). "MIST: Fog-based data analytics scheme with cost-efficient resource provisioning for IoT crowdsensing applications". Journal of Network and Computer Applications. 82: 152–165. doi:10.1016/j.jnca.2017.01.012.
  122. Yang, J.; Adamic, L.; Ackerman, M. (2008), "Crowdsourcing and Knowledge Sharing: Strategic User Behavior on Taskcn" (PDF), Proceedings of the 9th ACM Conference on Electronic Commerce, doi:10.1145/1386790.1386829, S2CID 15553154
  123. Gadiraju, U.; Kawase, R.; Dietze, S. (2014), "A Taxonomy of Microtasks on the Web", Proceedings of the 25th ACM Conference on Hypertext and Social Media: 218–223, doi:10.1145/2631775.2631819, ISBN 9781450329545, S2CID 15164576
  124. Felstiner, Alek (August 2011). "Working the Crowd: Employment and Labor Law in the Crowdsourcing Industry" (PDF). Berkeley Journal of Employment & Labor Law. 32: 150–151.
  125. "View of Crowdsourcing: Libertarian Panacea or Regulatory Nightmare?". online-shc.com (in English). Retrieved 26 May 2017.https://en.wikipedia.org/wiki/Defekte_Weblinks?dwl={{{url}}} Seite nicht mehr abrufbar], Suche in Webarchiven: Kategorie:Wikipedia:Weblink offline (andere Namensräume)[http://timetravel.mementoweb.org/list/2010/Kategorie:Wikipedia:Vorlagenfehler/Vorlage:Toter Link/URL_fehlt
  126. Leimeister, J.M.; Huber, M.; Bretschneider, U.; Krcmar, H. (2009), "Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition", Journal of Management Information Systems, 26 (1): 197–224, doi:10.2753/mis0742-1222260108, S2CID 17485373
  127. Ebner, W.; Leimeister, J.; Krcmar, H. (2009), "Community Engineering for Innovations: The Ideas Competition as a method to nurture a Virtual Community for Innovations", R&D Management, 39 (4): 342–356, doi:10.1111/j.1467-9310.2009.00564.x, S2CID 16316321
  128. "DARPA Network Challenge". DARPA Network Challenge. Archived from the original on 11 August 2011. Retrieved 28 November 2011.
  129. "Social media web snares 'criminals'". New Scientist. Retrieved 4 April 2012.
  130. "Beyond XPrize: The 10 Best Crowdsourcing Tools and Technologies". 20 February 2012. Retrieved 30 March 2012.
  131. Kittur, A.; Chi, E.H.; Sun, B. (2008), "Crowdsourcing user studies with Mechanical Turk" (PDF), CHI 2008
  132. Tang, Weiming; Han, Larry; Best, John; Zhang, Ye; Mollan, Katie; Kim, Julie; Liu, Fengying; Hudgens, Michael; Bayus, Barry (1 June 2016). "Crowdsourcing HIV Test Promotion Videos: A Noninferiority Randomized Controlled Trial in China". Clinical Infectious Diseases. 62 (11): 1436–1442. doi:10.1093/cid/ciw171. ISSN 1537-6591. PMC 4872295. PMID 27129465.
  133. Zhang, Ye; Kim, Julie A.; Liu, Fengying; Tso, Lai Sze; Tang, Weiming; Wei, Chongyi; Bayus, Barry L.; Tucker, Joseph D. (November 2015). "Creative Contributory Contests to Spur Innovation in Sexual Health: 2 Cases and a Guide for Implementation". Sexually Transmitted Diseases. 42 (11): 625–628. doi:10.1097/OLQ.0000000000000349. ISSN 1537-4521. PMC 4610177. PMID 26462186.
  134. Créquit, Perrine (2018). "Mapping of Crowdsourcing in Health: Systematic Review". Journal of Medical Internet Research (in English). 20 (5): e187. doi:10.2196/jmir.9330. PMC 5974463. PMID 29764795.
  135. van der Krieke; et al. (2015). "HowNutsAreTheDutch (HoeGekIsNL): A crowdsourcing study of mental symptoms and strengths" (PDF). International Journal of Methods in Psychiatric Research. 25 (2): 123–144. doi:10.1002/mpr.1495. PMC 6877205. PMID 26395198.
  136. Prpić, J. (2015). "Health Care Crowds: Collective Intelligence in Public Health. Collective Intelligence 2015. Center for the Study of Complex Systems, University of Michigan". Papers.ssrn.com. SSRN 2570593.
  137. van der Krieke, L; Blaauw, FJ; Emerencia, AC; Schenk, HM; Slaets, JP; Bos, EH; de Jonge, P; Jeronimus, BF (2016). "Temporal Dynamics of Health and Well-Being: A Crowdsourcing Approach to Momentary Assessments and Automated Generation of Personalized Feedback". Psychosomatic Medicine. 79 (2): 213–223. doi:10.1097/PSY.0000000000000378. PMID 27551988. S2CID 10955232.
  138. Rahman, Mahbubur; Blackwell, Brenna; Banerjee, Nilanjan; Dharmendra, Saraswat (2015), "Smartphone-based hierarchical crowdsourcing for weed identification", Computers and Electronics in Agriculture, 113: 14–23, doi:10.1016/j.compag.2014.12.012, retrieved 12 August 2015
  139. Primarily on the Bridge Winners website
  140. Dolan, Shelagh, "Crowdsourced delivery explained: making same day shipping cheaper through local couriers.", Business Insider, archived from the original on 22 May 2018, retrieved 21 May 2018
  141. Murison, Malek, "LivingPackets uses IoT, crowdshipping to transform deliveries", Internet of Business, retrieved 19 April 2018
  142. Biller, David; Sciaudone, Christina, "Goldman Sachs, Soros Bet on the Uber of Brazilian Trucking", Bloomberg, retrieved 11 March 2019
  143. Tyrsina, Radu, "Parcl Uses Trusted Forwarders to Bring you Products that don't Ship to your Country", Technology Personalised, archived from the original on 3 October 2015, retrieved 1 October 2015
  144. Noveck, Beth Simone (2009), Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, Brookings Institution Press
  145. Sarasua, Cristina; Simperl, Elena; Noy, Natalya F. (2012), "Crowdsourcing Ontology Alignment with Microtasks" (PDF), Institute AIFB. Karlsruhe Institute of Technology: 2
  146. Hollow, Matthew. "Crowdfunding and Civic Society in Europe: A Profitable Partnership?". Open Citizenship. Retrieved 29 April 2013.
  147. Federal Transit Administration Public Transportation Participation Pilot Program, U.S. Department of Transportation, archived from the original on 7 January 2009
  148. Peer-to-Patent Community Patent Review Project, Peer-to-Patent Community Patent Review Project
  149. Callison-Burch, C.; Dredze, M. (2010), "Creating Speech and Language Data With Amazon's Mechanical Turk" (PDF), Human Language Technologies Conference: 1–12, archived from the original (PDF) on 2 August 2012, retrieved 28 February 2012
  150. McGraw, I.; Seneff, S. (2011), "Growing a Spoken Language Interface on Amazon Mechanical Turk" (PDF), Interspeech: 3057–3060
  151. Mason, W.; Suri, S. (2010), "Conducting Behavioral Research on Amazon's Mechanical Turk", Behavior Research Methods, SSRN 1691163
  152. Koblin, A. (2008), "The sheep market", Creativity and Cognition: 451, doi:10.1145/1640233.1640348, ISBN 9781605588650, S2CID 20609292
  153. "explodingdog 2015". Explodingdog.com. Retrieved 2 July 2015.
  154. Linver, D. (2010), Crowdsourcing and the Evolving Relationship between Art and Artist, archived from the original on 14 July 2014, retrieved 28 February 2012
  155. "Why". INRIX.com. 13 September 2014. Archived from the original on 12 October 2014. Retrieved 2 July 2015.
  156. Ross, J.; Irani, L.; Silberman, M.S.; Zaldivar, A.; Tomlinson, B. (2010). "Who are the Crowdworkers? Shifting Demographics in Mechanical Turk" (PDF). CHI 2010. Archived from the original (PDF) on 1 April 2011. Retrieved 28 February 2012.
  157. Hirth, M.; Hoßfeld, T.; Tran-Gia, P. (2011), Human Cloud as Emerging Internet Application – Anatomy of the Microworkers Crowdsourcing Platform (PDF)
  158. Brabham, Daren C. (2008). "Moving the Crowd at iStockphoto: The Composition of the Crowd and Motivations for Participation in a Crowdsourcing Application". First Monday. 13 (6). doi:10.5210/fm.v13i6.2159. Archived from the original on 24 November 2012. Retrieved 27 June 2012.
  159. Lakhani; et al. (2007). "The Value of Openness in Scientific Problem Solving" (PDF). Retrieved 26 February 2012.
  160. Brabham, Daren C. (2012). "Managing Unexpected Publics Online: The Challenge of Targeting Specific Groups with the Wide-Reaching Tool of the Internet". International Journal of Communication.
  161. Brabham, Daren C. (2010). "Moving the Crowd at Threadless: Motivations for Participation in a Crowdsourcing Application". Information, Communication & Society. 13 (8): 1122–1145. doi:10.1080/13691181003624090. S2CID 143402410.
  162. Brabham, Daren C. (2012). "The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage". Information, Communication & Society. 15 (3): 394–410. doi:10.1080/1369118X.2011.641991. S2CID 145675154.
  163. Saxton, Oh, & Kishore (2013). "Rules of Crowdsourcing: Models, Issues, and Systems of Control". Information Systems Management. 30: 2–20. CiteSeerX 10.1.1.300.8026. doi:10.1080/10580530.2013.739883. S2CID 16811686.
  164. Aitamurto, Tanja (2015). "Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change, and Peer Learning". International Journal of Communication. 9: 3523–3543.
  165. Kaufmann, N.; Schulze, T.; Viet, D. (2011). "More than fun and money. Worker Motivation in Crowdsourcing – A Study on Mechanical Turk" (PDF). Proceedings of the Seventeenth Americas Conference on Information Systems. Archived from the original (PDF) on 27 February 2012.
  166. Brabham, Daren C. (2012). "Motivations for Participation in a Crowdsourcing Application to Improve Public Engagement in Transit Planning". Journal of Applied Communication Research. 40 (3): 307–328. doi:10.1080/00909882.2012.693940. S2CID 144807388.
  167. Lietsala, Katri; Joutsen, Atte (2007). "Hang-a-rounds and True Believers: A Case Analysis of the Roles and Motivational Factors of the Star Wreck Fans". MindTrek 2007 Conference Proceedings.
  168. "State of the World's Volunteerism Report 2011" (PDF). Unv.org. Archived from the original (PDF) on 2 December 2014. Retrieved 1 July 2015.
  169. Chandler, D.; Kapelner, A. (2010). "Breaking Monotony with Meaning: Motivation in Crowdsourcing Markets" (PDF). Journal of Economic Behavior & Organization. 90: 123–133. arXiv:1210.0962. doi:10.1016/j.jebo.2013.03.003. S2CID 8563262.
  170. Aparicio, M.; Costa, C.; Braga, A. (2012). "Proposing a system to support crowdsourcing" (PDF). pp. 13–17. doi:10.1145/2316936.2316940. ISBN 9781450315258. https://www.researchgate.net/publication/232659015.
  171. Aitamurto; Landemore; Galli (2016). "Unmasking the Crowd: Participants' Motivation Factors, Expectations, and Profile in a Crowdsourced Law Reform". Information, Communication & Society.
  172. Quinn, Alexander J.; Bederson, Benjamin B. (2011). "Human Computation: A Survey and Taxonomy of a Growing Field, CHI 2011 [Computer Human Interaction conference], May 7–12, 2011, Vancouver, BC, Canada" (PDF). Retrieved 30 June 2015.
  173. Paolacci, G; Chandler, J; Ipeirotis, P.G. (2010). "Running experiments on Amazon Mechanical Turk". Judgment and Decision Making. 5 (5): 411–419. hdl:1765/31983.
  174. Prpić, J; Shukla, P.; Roth, Y.; Lemoine, J.F. (2015). "A Geography of Participation in IT-Mediated Crowds". Proceedings of the Hawaii International Conference on Systems Sciences 2015. SSRN 2494537.
  175. Borst, Irma. "The Case For and Against Crowdsourcing: Part 2". Archived from the original on 12 September 2015. Retrieved 9 February 2015.
  176. Ipeirotis; Provost; Wang (2010). "Quality Management on Amazon Mechanical Turk" (PDF). Archived from the original (PDF) on 9 August 2012. Retrieved 28 February 2012.
  177. Lukyanenko, Roman; Parsons, Jeffrey; Wiersma, Yolanda (2014). "The IQ of the Crowd: Understanding and Improving Information Quality in Structured User-Generated Content". Information Systems Research. 25 (4): 669–689. doi:10.1287/isre.2014.0537.
  178. Goerzen, Thomas; Kundisch, Dennis (11 August 2016). "Can the Crowd Substitute Experts in Evaluation of Creative Ideas? An Experimental Study Using Business Models". AMCIS 2016 Proceedings.
  179. Burnap, Alex; Ren, Alex J.; Papazoglou, Giannis; Gerth, Richard; Gonzalez, Richard; Papalambros, Panos. "When Crowdsourcing Fails: A Study of Expertise on Crowdsourced Design Evaluation" (PDF). Archived from the original (PDF) on 29 October 2015. Retrieved 19 May 2015.
  180. Kurve, Aditya; Miller, David J.; Kesidis, George (30 May 2014). "Multicategory Crowdsourcing Accounting for Variable Task Difficulty, Worker Skill, and Worker Intention". IEEE Transactions on Knowledge and Data Engineering.
  181. Hirth; Hoßfeld; Tran-Gia (2011), Human Cloud as Emerging Internet Application - Anatomy of the Microworkers Crowdsourcing Platform (PDF)
  182. Ipeirotis, Panagiotis G. (2010). "Analyzing the Amazon Mechanical Turk Marketplace" (PDF). XRDS: Crossroads, the ACM Magazine for Students. 17 (2): 16–21. doi:10.1145/1869086.1869094. S2CID 6472586. SSRN 1688194. Retrieved 2 October 2018.
  183. Hosaka, Tomoko A. (April 2008). "Facebook asks users to translate for free". NBC News.
  184. Britt, Darice. "Crowdsourcing: The Debate Roars On". Archived from the original on 1 July 2014. Retrieved 4 December 2012.
  185. Woods, Dan (28 September 2009). "The Myth of Crowdsourcing". Forbes. Retrieved 4 December 2012.
  186. Aitamurto, Tanja; Leiponen, Aija. "The Promise of Idea Crowdsourcing: Benefits, Contexts, Limitations". Ideasproject.com. Retrieved 2 July 2015.
  187. "International Translators Association Launched in Argentina". Latin American Herald Tribune. Retrieved 23 November 2016.
  188. Kleeman, Frank (2008). "Un(der)paid Innovators: The Commercial Utilization of Consumer Work through Crowdsourcing". Sti-studies.de. Retrieved 2 July 2015.
  189. Jason (2011). "Crowdsourcing: A Million Heads is Better Than One". Crowdsourcing.org. Archived from the original on 3 July 2015. Retrieved 2 July 2015.
  190. Dupree, Steven (2014). "Crowdfunding 101: Pros and Cons". Gsb.stanford.edu. Retrieved 2 July 2015.
  191. "Fair Labor Standards Act Advisor". Retrieved 28 February 2012.
  192. Greg Norcie, 2011, "Ethical and practical considerations for compensation of crowdsourced research participants," CHI WS on Ethics Logs and VideoTape: Ethics in Large Scale Trials & User Generated Content, [2], archived at the Internet Archive on 30 June 2012, accessed 30 June 2015.
  193. Busarovs, Aleksejs (2013). "Ethical Aspects of Crowdsourcing, or is it a Modern Form of Exploitation" (PDF). International Journal of Economics & Business Administration. 1 (1): 3–14. doi:10.35808/ijeba/1. Retrieved 26 November 2014.
  194. Graham, Mark; Hjorth, Isis; Lehdonvirta, Vili (1 May 2017). "Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods". Transfer: European Review of Labour and Research (in English). 23 (2): 135–162. doi:10.1177/1024258916687250. ISSN 1024-2589. PMC 5518998. PMID 28781494.
  195. "The Crowdsourcing Scam" (December 2014). The Baffler, No. 26.
  196. Salehi; et al. (2015). "We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers" (PDF). Retrieved 16 June 2015.





This page was moved from wikipedia:en:Crowdsourcing. Its edit history can be viewed at 众包/edithistory