第140行: |
第140行: |
| There are techniques (which are described below) using which we can create a differentially private algorithm for functions with low sensitivity. | | There are techniques (which are described below) using which we can create a differentially private algorithm for functions with low sensitivity. |
| | | |
− | 我们也可以通过一些技术(下面将描述) 使用低灵敏度函数设计差分隐私算法。 | + | 我们可以通过一些技术(如下所述)为低灵敏度函数设计差分隐私算法。 |
| | | |
| ===The Laplace mechanism=== | | ===The Laplace mechanism=== |
第187行: |
第187行: |
| Now suppose a malicious user (often termed an ''adversary'') wants to find whether Chandler has diabetes or not. Suppose he also knows in which row of the database Chandler resides. Now suppose the adversary is only allowed to use a particular form of query <math>Q_i</math> that returns the partial sum of the first <math>i</math> rows of column <math>X</math> in the database. In order to find Chandler's diabetes status the adversary executes <math>Q_5(D_1)</math> and <math>Q_4(D_1)</math>, then computes their difference. In this example, <math>Q_5(D_1) = 3</math> and <math>Q_4(D_1) = 2</math>, so their difference is 1. This indicates that the "Has Diabetes" field in Chandler's row must be 1. This example highlights how individual information can be compromised even without explicitly querying for the information of a specific individual. | | Now suppose a malicious user (often termed an ''adversary'') wants to find whether Chandler has diabetes or not. Suppose he also knows in which row of the database Chandler resides. Now suppose the adversary is only allowed to use a particular form of query <math>Q_i</math> that returns the partial sum of the first <math>i</math> rows of column <math>X</math> in the database. In order to find Chandler's diabetes status the adversary executes <math>Q_5(D_1)</math> and <math>Q_4(D_1)</math>, then computes their difference. In this example, <math>Q_5(D_1) = 3</math> and <math>Q_4(D_1) = 2</math>, so their difference is 1. This indicates that the "Has Diabetes" field in Chandler's row must be 1. This example highlights how individual information can be compromised even without explicitly querying for the information of a specific individual. |
| | | |
− | 现在假设一个恶意用户(通常称为对手)想要查看 Chandler 是否患有糖尿病。假设他也知道 Chandler 在数据库的哪一行中。现在假设对手只允许使用特定形式的查询 q _ i,该查询返回数据库中列 x 的第一个 i 行的部分和。为了查找钱德勒的糖尿病状态,对手执行 q5(d1)和 q4(d1) ,然后计算它们的差异。在这个例子中,q _ 5(d _ 1) = 3和 q _ 4(d _ 1) = 2,所以它们的差值是1。这表示 Chandler 行中的“ Has Diabetes”字段必须为1。这个示例突出说明了即使不显式查询特定个人的信息,个人信息也可能被泄露。
| + | 现在假设一个恶意用户(通常称为对手)想要知道 Chandler 是否患有糖尿病。假设他也知道 Chandler 位于数据库的哪一行。现在假设对手只被允许使用特定形式的查询<math>Q_i</math>,该查询返回数据库中列<math>X</math>前<math>i</math>行的部分和。为了查明 Chandler 的糖尿病状态,对手执行<math>Q_5(D_1)</math>和<math>Q_4(D_1)</math>,然后计算它们的差值。在这个例子中,<math>Q_5(D_1) = 3</math>且<math>Q_4(D_1) = 2</math>,所以它们的差值是1。这表示 Chandler 行中的“Has Diabetes”字段必然为1。这个例子表明,即使不显式查询特定个人的信息,个人信息也可能被泄露。 |
| | | |
| Continuing this example, if we construct <math>D_2</math> by replacing (Chandler, 1) with (Chandler, 0) then this malicious adversary will be able to distinguish <math>D_2</math> from <math>D_1</math> by computing <math>Q_5 - Q_4</math> for each dataset. If the adversary were required to receive the values <math>Q_i</math> via an <math>\epsilon</math>-differentially private algorithm, for a sufficiently small <math>\epsilon</math>, then he or she would be unable to distinguish between the two datasets. | | Continuing this example, if we construct <math>D_2</math> by replacing (Chandler, 1) with (Chandler, 0) then this malicious adversary will be able to distinguish <math>D_2</math> from <math>D_1</math> by computing <math>Q_5 - Q_4</math> for each dataset. If the adversary were required to receive the values <math>Q_i</math> via an <math>\epsilon</math>-differentially private algorithm, for a sufficiently small <math>\epsilon</math>, then he or she would be unable to distinguish between the two datasets. |
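The attack and its defense can be made concrete. The following Python fragment is a minimal sketch, not part of the original article: the dataset values, the helper names, and the choice epsilon = 0.1 are illustrative assumptions. It first reproduces the exact differencing attack above, then answers the same partial-sum queries through a Laplace mechanism calibrated to the query's sensitivity of 1.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# "Has Diabetes" column X for rows 1..5; Chandler is row 5.
D1 = [1, 1, 0, 0, 1]
D2 = [1, 1, 0, 0, 0]  # identical to D1 except for Chandler's entry

def Q(db, i):
    """Exact partial-sum query: sum of the first i rows of column X."""
    return sum(db[:i])

# Exact answers leak Chandler's status: Q5 - Q4 distinguishes D1 from D2.
print(Q(D1, 5) - Q(D1, 4))  # 1 -> Chandler's field is 1
print(Q(D2, 5) - Q(D2, 4))  # 0 -> Chandler's field is 0

def laplace_Q(db, i, epsilon):
    """epsilon-DP partial sum: changing one row changes Q_i by at most 1
    (sensitivity 1), so Laplace noise of scale 1/epsilon suffices."""
    return Q(db, i) + rng.laplace(scale=1.0 / epsilon)

# With noisy answers, Q5 - Q4 is dominated by noise for small epsilon,
# so the adversary can no longer reliably tell D1 from D2.
eps = 0.1
print(laplace_Q(D1, 5, eps) - laplace_Q(D1, 4, eps))
print(laplace_Q(D2, 5, eps) - laplace_Q(D2, 4, eps))
</syntaxhighlight>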
第196行: |
第196行: |
| {{See also|Local differential privacy}} | | {{See also|Local differential privacy}} |
| | | |
− | A simple example, especially developed in the [[social science]]s,<ref name=":7">{{cite journal |last=Warner |first=S. L. |date=March 1965 |title=Randomised response: a survey technique for eliminating evasive answer bias |jstor=2283137 |journal=[[Journal of the American Statistical Association]] |publisher=[[Taylor & Francis]] |volume=60 |issue=309 |pages=63–69 |doi= 10.1080/01621459.1965.10480775|pmid=12261830 }}</ref> is to ask a person to answer the question "Do you own the ''attribute A''?", according to the following procedure: | + | A simple example, especially developed in the [[social sciences]],<ref name=":7">{{cite journal |last=Warner |first=S. L. |date=March 1965 |title=Randomised response: a survey technique for eliminating evasive answer bias |jstor=2283137 |journal=[[Journal of the American Statistical Association]] |publisher=[[Taylor & Francis]] |volume=60 |issue=309 |pages=63–69 |doi= 10.1080/01621459.1965.10480775|pmid=12261830 }}</ref> is to ask a person to answer the question "Do you own the ''attribute A''?", according to the following procedure: |
| | | |
− | 一个简单的例子,尤其是在社会科学领域<ref name=":7" />,这就是指让一个人遵从下列程序回答“你拥有属性''A''吗?”:
| + | 一个简单的例子,尤其是在<font color="#ff8000">社会科学Social Sciences</font>领域发展起来的<ref name=":7" />,是让一个人遵从下列程序回答“你拥有属性''A''吗?”: |
| | | |
| #[[Coin flipping|Toss a coin]]. | | #[[Coin flipping|Toss a coin]]. |
第204行: |
第204行: |
| #If tails, then toss the coin again and answer "Yes" if heads, "No" if tails. | | #If tails, then toss the coin again and answer "Yes" if heads, "No" if tails. |
| | | |
− | #抛硬币。 | + | #<font color="#ff8000">抛硬币Toss a Coin</font>。 |
− | #如果是正面,再掷硬币(忽略结果),诚实地回答问题。 | + | #如果是正面,再掷一次硬币(忽略结果),并诚实地回答问题。 |
| #如果是反面,再掷一次硬币,如果是正面,回答“是”; 如果是反面,回答“否”。 | | #如果是反面,再掷一次硬币,如果是正面,回答“是”; 如果是反面,回答“否”。 |
| | | |
| (The seemingly redundant extra toss in the first case is needed in situations where just the ''act'' of tossing a coin may be observed by others, even if the actual result stays hidden.) The confidentiality then arises from the [[Falsifiability|refutability]] of the individual responses. | | (The seemingly redundant extra toss in the first case is needed in situations where just the ''act'' of tossing a coin may be observed by others, even if the actual result stays hidden.) The confidentiality then arises from the [[Falsifiability|refutability]] of the individual responses. |
| | | |
− | (在第一种情况下,看似多余的额外投掷是必要的,因为在这种情况下,即使实际结果仍然隐藏着,仅仅是抛硬币的动作就可能被其他人观察到。)这种保密性来自于个人反应的可反驳性。
| + | (在第一种情况下,看似多余的额外投掷是必要的,因为在这种情况下,即使实际结果仍然隐藏着,仅仅是抛硬币的动作就可能被其他人观察到。)这种保密性来自于个人反应的<font color="#ff8000">可驳性Refutability</font>。 |
| | | |
− | But, overall, these data with many responses are significant, since positive responses are given to a quarter by people who do not have the ''attribute A'' and three-quarters by people who actually possess it. | + | But, overall, these data with many responses are significant, since positive responses are given to a quarter by people who do not have the ''attribute A'' and three-quarters by people who actually possess it. Thus, if ''p'' is the true proportion of people with ''A'', then we expect to obtain (1/4)(1-''p'') + (3/4)''p'' = (1/4) + ''p''/2 positive responses. Hence it is possible to estimate ''p''. |
− | Thus, if ''p'' is the true proportion of people with ''A'', then we expect to obtain (1/4)(1-''p'') + (3/4)''p'' = (1/4) + ''p''/2 positive responses. Hence it is possible to estimate ''p''. | |
| | | |
| 但是,总体而言,当回答数量很多时,这些数据是有意义的:不具有属性''A''的人中有四分之一会给出肯定回答,而真正具有该属性的人中有四分之三会给出肯定回答。因此,如果''p''是具有属性''A''的人的真实比例,那么我们期望得到 (1/4)(1-''p'') + (3/4)''p'' = (1/4) + ''p''/2 比例的肯定回答。由此可以估计''p''。 | | 但是,总体而言,当回答数量很多时,这些数据是有意义的:不具有属性''A''的人中有四分之一会给出肯定回答,而真正具有该属性的人中有四分之三会给出肯定回答。因此,如果''p''是具有属性''A''的人的真实比例,那么我们期望得到 (1/4)(1-''p'') + (3/4)''p'' = (1/4) + ''p''/2 比例的肯定回答。由此可以估计''p''。 |
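The arithmetic above can be checked with a short simulation. The following Python sketch is illustrative, not from the original article: the population size and the true proportion p = 0.3 are assumptions chosen for the demo. It simulates the two-coin protocol and recovers p by inverting the expected "Yes" rate (1/4) + p/2.

<syntaxhighlight lang="python">
import random

def randomized_response(has_attribute_A: bool) -> bool:
    """One respondent following the two-coin protocol above."""
    if random.random() < 0.5:           # first toss: heads
        return has_attribute_A          # answer honestly (second toss ignored)
    return random.random() < 0.5        # first toss tails: second toss decides

# Simulate a population whose true proportion with attribute A is p = 0.3.
random.seed(1)
p_true, n = 0.3, 100_000
answers = [randomized_response(random.random() < p_true) for _ in range(n)]

# E[fraction of "Yes"] = (1/4)(1 - p) + (3/4)p = 1/4 + p/2, so invert it:
yes_fraction = sum(answers) / n
p_hat = 2 * (yes_fraction - 0.25)
print(round(p_hat, 3))  # close to 0.3
</syntaxhighlight>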
第219行: |
第218行: |
| In particular, if the ''attribute A'' is synonymous with illegal behavior, then answering "Yes" is not incriminating, insofar as the person has a probability of a "Yes" response, whatever it may be. | | In particular, if the ''attribute A'' is synonymous with illegal behavior, then answering "Yes" is not incriminating, insofar as the person has a probability of a "Yes" response, whatever it may be. |
| | | |
− | In particular, if the attribute A is synonymous with illegal behavior, then answering "Yes" is not incriminating, insofar as the person has a probability of a "Yes" response, whatever it may be.
| + | 特别是,如果属性''A''等同于非法行为,那么由于任何人都有一定的概率回答“是”,无论其真实情况如何,回答“是”都不构成定罪依据。 |
| | | |
− | 特别是,如果属性 a 是非法行为的同义词,那么回答“是”并不意味着定罪,只要这个人有可能作出“是”的回答,无论它可能是什么。
| + | Although this example, inspired by [[randomized response]], might be applicable to [[Microdata (statistics)|microdata]] (i.e., releasing datasets with each individual response), by definition differential privacy excludes microdata releases and is only applicable to queries (i.e., aggregating individual responses into one result) as this would violate the requirements, more specifically the plausible deniability that a subject participated or not.<ref name=":10">Dwork, Cynthia. "A firm foundation for private data analysis." Communications of the ACM 54.1 (2011): 86–95, supra note 19, page 91.</ref><ref name=":11">Bambauer, Jane, Krishnamurty Muralidhar, and Rathindra Sarathy. "Fool's gold: an illustrated critique of differential privacy." Vand. J. Ent. & Tech. L. 16 (2013): 701.</ref> |
| | | |
− | Although this example, inspired by [[randomized response]], might be applicable to [[Microdata (statistics)|microdata]] (i.e., releasing datasets with each individual response), by definition differential privacy excludes microdata releases and is only applicable to queries (i.e., aggregating individual responses into one result) as this would violate the requirements, more specifically the plausible deniability that a subject participated or not.<ref>Dwork, Cynthia. "A firm foundation for private data analysis." Communications of the ACM 54.1 (2011): 86–95, supra note 19, page 91.</ref><ref>Bambauer, Jane, Krishnamurty Muralidhar, and Rathindra Sarathy. "Fool's gold: an illustrated critique of differential privacy." Vand. J. Ent. & Tech. L. 16 (2013): 701.</ref>
| + | 虽然这个例子受到了<font color="#ff8000">随机应答Randomized Response</font>的启发,可能适用于<font color="#ff8000">微数据Microdata</font>(即发布包含每个个体应答的数据集),但根据定义,差分隐私排除了微数据的发布,并且只适用于查询(即将单个应答聚合成一个结果),因为微数据发布会违反差分隐私的要求,更具体地说,会破坏对某个主体是否参与的合理否认。<ref name=":10" /><ref name=":11" /> |
| | | |
− | Although this example, inspired by randomized response, might be applicable to microdata (i.e., releasing datasets with each individual response), by definition differential privacy excludes microdata releases and is only applicable to queries (i.e., aggregating individual responses into one result) as this would violate the requirements, more specifically the plausible deniability that a subject participated or not.Dwork, Cynthia. "A firm foundation for private data analysis." Communications of the ACM 54.1 (2011): 86–95, supra note 19, page 91.Bambauer, Jane, Krishnamurty Muralidhar, and Rathindra Sarathy. "Fool's gold: an illustrated critique of differential privacy." Vand. J. Ent. & Tech. L. 16 (2013): 701.
| + | ===Stable transformations=== |
− | | |
− | 虽然这个例子受到了随机化回答的启发,可能适用于微数据(例如,发布每个响应的数据集) ,但根据定义,差分隐私排除了微数据发布,并且只适用于查询(例如,将单个响应聚合成一个结果) ,因为这将违反要求,更具体地说,是一个主题参与或不参与的似是而非的否认。辛西娅。“为私人数据分析奠定坚实的基础。”美国计算机学会通讯54.1(2011) : 86-95,上注19,第91页. Bambauer,Jane,Krishnamurty Muralidhar,and Rathindra Sarathy。“愚人的黄金: 对差分隐私的插图式批评。”Vand.J. Ent.北京科技发展有限公司。L. 16(2013) : 701.
| |
− | | |
− | === Stable transformations === | |
| A transformation <math>T</math> is <math>c</math>-stable if the [[Hamming distance]] between <math>T(A)</math> and <math>T(B)</math> is at most <math>c</math>-times the Hamming distance between <math>A</math> and <math>B</math> for any two databases <math>A,B</math>. Theorem 2 in <ref name="PINQ" /> asserts that if there is a mechanism <math>M</math> that is <math>\epsilon</math>-differentially private, then the composite mechanism <math>M\circ T</math> is <math>(\epsilon \times c)</math>-differentially private. | | A transformation <math>T</math> is <math>c</math>-stable if the [[Hamming distance]] between <math>T(A)</math> and <math>T(B)</math> is at most <math>c</math>-times the Hamming distance between <math>A</math> and <math>B</math> for any two databases <math>A,B</math>. Theorem 2 in <ref name="PINQ" /> asserts that if there is a mechanism <math>M</math> that is <math>\epsilon</math>-differentially private, then the composite mechanism <math>M\circ T</math> is <math>(\epsilon \times c)</math>-differentially private. |
| | | |
− | A transformation T is c-stable if the Hamming distance between T(A) and T(B) is at most c-times the Hamming distance between A and B for any two databases A,B. Theorem 2 in asserts that if there is a mechanism M that is \epsilon-differentially private, then the composite mechanism M\circ T is (\epsilon \times c)-differentially private. | + | 对于任意两个数据库<math>A,B</math>,如果<math>T(A)</math>和<math>T(B)</math>之间的汉明距离最多是<math>A</math>和<math>B</math>之间的<font color="#ff8000">汉明距离Hamming Distance</font>的<math>c</math>倍,则变换<math>T</math>是<math>c</math>-稳定的。文章<ref name="PINQ" />中的定理2指出,如果存在一个机制<math>M</math>是<math>\epsilon</math>-差分隐私的,那么复合机制<math>M\circ T</math>也是<math>(\epsilon \times c)</math>-差分隐私的。 |
− | | |
− | 对于任意两个数据库 a,b,如果 t (a)和 t (b)之间的汉明距离最多是 a 和 b 之间的汉明距离的 c 倍,则变换 t 是 c 稳定的。定理2断言,如果存在一个机制 m 是 epsilon-微分私有的,那么复合机制 m circ t 是(epsilon 乘以 c)-微分私有的。
| |
| | | |
| This could be generalized to group privacy, as the group size could be thought of as the Hamming distance <math>h</math> between | | This could be generalized to group privacy, as the group size could be thought of as the Hamming distance <math>h</math> between |
| <math>A</math> and <math>B</math> (where <math>A</math> contains the group and <math>B</math> doesn't). In this case <math>M\circ T</math> is <math>(\epsilon \times c \times h)</math>-differentially private. | | <math>A</math> and <math>B</math> (where <math>A</math> contains the group and <math>B</math> doesn't). In this case <math>M\circ T</math> is <math>(\epsilon \times c \times h)</math>-differentially private. |
| | | |
− | This could be generalized to group privacy, as the group size could be thought of as the Hamming distance h between
| |
− | A and B (where A contains the group and B doesn't). In this case M\circ T is (\epsilon \times c \times h)-differentially private.
| |
| | | |
− | 这可以推广到群组隐私,因为群组大小可以被认为是 a 和 b 之间的汉明距离 h (其中 a 包含群组,而 b 没有)。在这种情况下,m circ t 是(epsilon 乘以 c 乘以 h)-微分私有的。
| + | 这可以推广到群组体隐私,因为群体大小可以被视为<math>A</math>和<math>B</math>之间的汉明距离<math>h</math>(其中<math>A</math>包含群组,而<math>B</math>没有)。在这种情况下,<math>M\circ T</math>是<math>(\epsilon \times c \times h)</math>-差分隐私的。 |
| | | |
| | | |
第252行: |
第243行: |
| Several uses of differential privacy in practice are known to date: | | Several uses of differential privacy in practice are known to date: |
| *2008: [[United States Census Bureau|U.S. Census Bureau]], for showing commuting patterns.<ref name="MachanavajjhalaKAGV08" /> | | *2008: [[United States Census Bureau|U.S. Census Bureau]], for showing commuting patterns.<ref name="MachanavajjhalaKAGV08" /> |
− | *2014: [[Google]]'s RAPPOR, for telemetry such as learning statistics about unwanted software hijacking users' settings. <ref name="RAPPOR" /><ref>{{Citation|title=google/rappor|date=2021-07-15|url=https://github.com/google/rappor|publisher=GitHub}}</ref> | + | * 2014: [[Google]]'s RAPPOR, for telemetry such as learning statistics about unwanted software hijacking users' settings. <ref name="RAPPOR" /><ref>{{Citation|title=google/rappor|date=2021-07-15|url=https://github.com/google/rappor|publisher=GitHub}}</ref> |
| *2015: Google, for sharing historical traffic statistics.<ref name="Eland" /> | | *2015: Google, for sharing historical traffic statistics.<ref name="Eland" /> |
| *2016: [[Apple Inc.|Apple]] announced its intention to use differential privacy in [[iOS 10]] to improve its [[Intelligent personal assistant]] technology.<ref>{{cite web|title=Apple - Press Info - Apple Previews iOS 10, the Biggest iOS Release Ever|url=https://www.apple.com/pr/library/2016/06/13Apple-Previews-iOS-10-The-Biggest-iOS-Release-Ever.html|website=Apple|access-date=16 June 2016}}</ref> | | *2016: [[Apple Inc.|Apple]] announced its intention to use differential privacy in [[iOS 10]] to improve its [[Intelligent personal assistant]] technology.<ref>{{cite web|title=Apple - Press Info - Apple Previews iOS 10, the Biggest iOS Release Ever|url=https://www.apple.com/pr/library/2016/06/13Apple-Previews-iOS-10-The-Biggest-iOS-Release-Ever.html|website=Apple|access-date=16 June 2016}}</ref> |
| *2017: Microsoft, for telemetry in Windows.<ref name="DpWinTelemetry" /> | | *2017: Microsoft, for telemetry in Windows.<ref name="DpWinTelemetry" /> |
| *2019: Privitar Lens is an API using differential privacy.<ref>{{cite web|title=Privitar Lens|url=https://www.privitar.com/privitar-lens|access-date=20 February 2018}}</ref> | | *2019: Privitar Lens is an API using differential privacy.<ref>{{cite web|title=Privitar Lens|url=https://www.privitar.com/privitar-lens|access-date=20 February 2018}}</ref> |
− | *2020: LinkedIn, for advertiser queries.<ref name="DpLinkedIn" /> | + | * 2020: LinkedIn, for advertiser queries.<ref name="DpLinkedIn" /> |
| | | |
| | | |
第267行: |
第258行: |
| *2017: Microsoft, for telemetry in Windows. | | *2017: Microsoft, for telemetry in Windows. |
| *2019: Privitar Lens is an API using differential privacy. | | *2019: Privitar Lens is an API using differential privacy. |
− | * 2020: LinkedIn, for advertiser queries. | + | *2020: LinkedIn, for advertiser queries. |
| | | |
| 在实践中,目前已知差分隐私有以下几种实际应用: | | 在实践中,目前已知差分隐私有以下几种实际应用: |
| *2008: 美国人口普查局,用于展示通勤模式。 | | *2008: 美国人口普查局,用于展示通勤模式。 |
第275行: |
第266行: |
| *2017: 微软,用于 Windows 遥测。 | | *2017: 微软,用于 Windows 遥测。 |
| *2019: Privitar Lens 是一个使用差分隐私的 API。 | | *2019: Privitar Lens 是一个使用差分隐私的 API。 |
| *2020: LinkedIn,用于广告商查询。 | | *2020: LinkedIn,用于广告商查询。 |
| | | |
− | ==Public purpose considerations== | + | ==Public purpose considerations == |
| There are several public purpose considerations regarding differential privacy that are important to consider, especially for policymakers and policy-focused audiences interested in the social opportunities and risks of the technology:<ref>{{Cite web|title=Technology Factsheet: Differential Privacy|url=https://www.belfercenter.org/publication/technology-factsheet-differential-privacy|access-date=2021-04-12|website=Belfer Center for Science and International Affairs|language=en}}</ref> | | There are several public purpose considerations regarding differential privacy that are important to consider, especially for policymakers and policy-focused audiences interested in the social opportunities and risks of the technology:<ref>{{Cite web|title=Technology Factsheet: Differential Privacy|url=https://www.belfercenter.org/publication/technology-factsheet-differential-privacy|access-date=2021-04-12|website=Belfer Center for Science and International Affairs|language=en}}</ref> |
| | | |
第284行: |
第275行: |
| *'''Data Utility & Accuracy.''' The main concern with differential privacy is the tradeoff between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less “noise” is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more “noise” is injected into the system). It is important for policymakers to consider the tradeoffs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this tradeoff. | | *'''Data Utility & Accuracy.''' The main concern with differential privacy is the tradeoff between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less “noise” is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more “noise” is injected into the system). It is important for policymakers to consider the tradeoffs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this tradeoff. |
| | | |
− | *Data Utility & Accuracy. The main concern with differential privacy is the tradeoff between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less “noise” is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more “noise” is injected into the system). It is important for policymakers to consider the tradeoffs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this tradeoff. | + | * Data Utility & Accuracy. The main concern with differential privacy is the tradeoff between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less “noise” is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more “noise” is injected into the system). It is important for policymakers to consider the tradeoffs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this tradeoff. |
| | | |
| | | |
第304行: |
第295行: |
| | | |
| *Quasi-identifier | | *Quasi-identifier |
− | *Exponential mechanism (differential privacy) – a technique for designing differentially private algorithms | + | * Exponential mechanism (differential privacy) – a technique for designing differentially private algorithms |
| *k-anonymity | | *k-anonymity |
| *Differentially private analysis of graphs | | *Differentially private analysis of graphs |
第312行: |
第303行: |
| *准标识符 | | *准标识符 |
| *指数机制(差分隐私)- 一种设计差分隐私算法的技术 | | *指数机制(差分隐私)- 一种设计差分隐私算法的技术 |
− | *k-匿名 | + | * k-匿名 |
| *图的差分隐私分析 | | *图的差分隐私分析 |
| *受保护的健康信息 | | *受保护的健康信息 |
| | | |
− | ==References == | + | ==References== |
| {{Reflist|refs= | | {{Reflist|refs= |
| <ref name="DKMMN06"> | | <ref name="DKMMN06"> |
第421行: |
第412行: |
| *[https://journalprivacyconfidentiality.org/index.php/jpc/article/view/404 Abowd, John. 2017. “How Will Statistical Agencies Operate When All Data Are Private?”. Journal of Privacy and Confidentiality 7 (3).] {{doi|10.29012/jpc.v7i3.404}} ([https://www2.census.gov/cac/sac/meetings/2017-09/role-statistical-agency.pdf slides]) | | *[https://journalprivacyconfidentiality.org/index.php/jpc/article/view/404 Abowd, John. 2017. “How Will Statistical Agencies Operate When All Data Are Private?”. Journal of Privacy and Confidentiality 7 (3).] {{doi|10.29012/jpc.v7i3.404}} ([https://www2.census.gov/cac/sac/meetings/2017-09/role-statistical-agency.pdf slides]) |
| *[http://www.jetlaw.org/wp-content/uploads/2018/12/4_Wood_Final.pdf "Differential Privacy: A Primer for a Non-technical Audience"], Kobbi Nissim, Thomas Steinke, Alexandra Wood, [[Micah Altman]], Aaron Bembenek, Mark Bun, Marco Gaboardi, David R. O’Brien, and Salil Vadhan, Harvard Privacy Tools Project, February 14, 2018 | | *[http://www.jetlaw.org/wp-content/uploads/2018/12/4_Wood_Final.pdf "Differential Privacy: A Primer for a Non-technical Audience"], Kobbi Nissim, Thomas Steinke, Alexandra Wood, [[Micah Altman]], Aaron Bembenek, Mark Bun, Marco Gaboardi, David R. O’Brien, and Salil Vadhan, Harvard Privacy Tools Project, February 14, 2018 |
− | *Dinur, Irit and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems(PODS '03). ACM, New York, NY, USA, 202-210. {{doi|10.1145/773153.773173}}. | + | * Dinur, Irit and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems(PODS '03). ACM, New York, NY, USA, 202-210. {{doi|10.1145/773153.773173}}. |
| *Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. in Halevi, S. & Rabin, T. (Eds.) Calibrating Noise to Sensitivity in Private Data Analysis Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4–7, 2006. Proceedings, Springer Berlin Heidelberg, 265-284, {{doi|10.1007/11681878 14}}. | | *Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. in Halevi, S. & Rabin, T. (Eds.) Calibrating Noise to Sensitivity in Private Data Analysis Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4–7, 2006. Proceedings, Springer Berlin Heidelberg, 265-284, {{doi|10.1007/11681878 14}}. |
| *Dwork, Cynthia. 2006. Differential Privacy, 33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006), Springer Verlag, 4052, 1-12, {{ISBN|3-540-35907-9}}. | | *Dwork, Cynthia. 2006. Differential Privacy, 33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006), Springer Verlag, 4052, 1-12, {{ISBN|3-540-35907-9}}. |
| *Dwork, Cynthia and Aaron Roth. 2014. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science. Vol. 9, Nos. 3–4. 211–407, {{doi|10.1561/0400000042}}. | | *Dwork, Cynthia and Aaron Roth. 2014. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science. Vol. 9, Nos. 3–4. 211–407, {{doi|10.1561/0400000042}}. |
− | * Machanavajjhala, Ashwin, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. 2008. Privacy: Theory Meets Practice on the Map, International Conference on Data Engineering (ICDE) 2008: 277-286, {{doi|10.1109/ICDE.2008.4497436}}. | + | *Machanavajjhala, Ashwin, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. 2008. Privacy: Theory Meets Practice on the Map, International Conference on Data Engineering (ICDE) 2008: 277-286, {{doi|10.1109/ICDE.2008.4497436}}. |
| *Dwork, Cynthia and Moni Naor. 2010. On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy, Journal of Privacy and Confidentiality: Vol. 2: Iss. 1, Article 8. Available at: http://repository.cmu.edu/jpc/vol2/iss1/8. | | *Dwork, Cynthia and Moni Naor. 2010. On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy, Journal of Privacy and Confidentiality: Vol. 2: Iss. 1, Article 8. Available at: http://repository.cmu.edu/jpc/vol2/iss1/8. |
− | * Kifer, Daniel and Ashwin Machanavajjhala. 2011. No free lunch in data privacy. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of data (SIGMOD '11). ACM, New York, NY, USA, 193-204. {{doi|10.1145/1989323.1989345}}. | + | *Kifer, Daniel and Ashwin Machanavajjhala. 2011. No free lunch in data privacy. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of data (SIGMOD '11). ACM, New York, NY, USA, 193-204. {{doi|10.1145/1989323.1989345}}. |
| *Erlingsson, Úlfar, Vasyl Pihur and Aleksandra Korolova. 2014. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS '14). ACM, New York, NY, USA, 1054-1067. {{doi|10.1145/2660267.2660348}}. | | *Erlingsson, Úlfar, Vasyl Pihur and Aleksandra Korolova. 2014. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS '14). ACM, New York, NY, USA, 1054-1067. {{doi|10.1145/2660267.2660348}}. |
| *Abowd, John M. and Ian M. Schmutte. 2017 . Revisiting the economics of privacy: Population statistics and confidentiality protection as public goods. Labor Dynamics Institute, Cornell University, Labor Dynamics Institute, Cornell University, at https://digitalcommons.ilr.cornell.edu/ldi/37/ | | *Abowd, John M. and Ian M. Schmutte. 2017 . Revisiting the economics of privacy: Population statistics and confidentiality protection as public goods. Labor Dynamics Institute, Cornell University, Labor Dynamics Institute, Cornell University, at https://digitalcommons.ilr.cornell.edu/ldi/37/ |
第434行: |
第425行: |
| *Ding, Bolin, Janardhan Kulkarni, and Sergey Yekhanin 2017. Collecting Telemetry Data Privately, NIPS 2017. | | *Ding, Bolin, Janardhan Kulkarni, and Sergey Yekhanin 2017. Collecting Telemetry Data Privately, NIPS 2017. |
| *http://www.win-vector.com/blog/2015/10/a-simpler-explanation-of-differential-privacy/ | | *http://www.win-vector.com/blog/2015/10/a-simpler-explanation-of-differential-privacy/ |
− | * Ryffel, Theo, Andrew Trask, et. al. [[arxiv:1811.04017|"A generic framework for privacy preserving deep learning"]] | + | *Ryffel, Theo, Andrew Trask, et. al. [[arxiv:1811.04017|"A generic framework for privacy preserving deep learning"]] |
| | | |
| *A reading list on differential privacy | | *A reading list on differential privacy |
| *Abowd, John. 2017. “How Will Statistical Agencies Operate When All Data Are Private?”. Journal of Privacy and Confidentiality 7 (3). (slides) | | *Abowd, John. 2017. “How Will Statistical Agencies Operate When All Data Are Private?”. Journal of Privacy and Confidentiality 7 (3). (slides) |
| *"Differential Privacy: A Primer for a Non-technical Audience", Kobbi Nissim, Thomas Steinke, Alexandra Wood, Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, David R. O’Brien, and Salil Vadhan, Harvard Privacy Tools Project, February 14, 2018 | | *"Differential Privacy: A Primer for a Non-technical Audience", Kobbi Nissim, Thomas Steinke, Alexandra Wood, Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, David R. O’Brien, and Salil Vadhan, Harvard Privacy Tools Project, February 14, 2018 |
− | *Dinur, Irit and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems(PODS '03). ACM, New York, NY, USA, 202-210. . | + | * Dinur, Irit and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems(PODS '03). ACM, New York, NY, USA, 202-210. . |
| *Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. in Halevi, S. & Rabin, T. (Eds.) Calibrating Noise to Sensitivity in Private Data Analysis Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4–7, 2006. Proceedings, Springer Berlin Heidelberg, 265-284, . | | *Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. in Halevi, S. & Rabin, T. (Eds.) Calibrating Noise to Sensitivity in Private Data Analysis Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4–7, 2006. Proceedings, Springer Berlin Heidelberg, 265-284, . |
| *Dwork, Cynthia. 2006. Differential Privacy, 33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006), Springer Verlag, 4052, 1-12, . | | *Dwork, Cynthia. 2006. Differential Privacy, 33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006), Springer Verlag, 4052, 1-12, . |
| *Dwork, Cynthia and Aaron Roth. 2014. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science. Vol. 9, Nos. 3–4. 211–407, . | | *Dwork, Cynthia and Aaron Roth. 2014. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science. Vol. 9, Nos. 3–4. 211–407, . |
− | *Machanavajjhala, Ashwin, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. 2008. Privacy: Theory Meets Practice on the Map, International Conference on Data Engineering (ICDE) 2008: 277-286, . | + | * Machanavajjhala, Ashwin, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. 2008. Privacy: Theory Meets Practice on the Map, International Conference on Data Engineering (ICDE) 2008: 277-286, . |
| *Dwork, Cynthia and Moni Naor. 2010. On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy, Journal of Privacy and Confidentiality: Vol. 2: Iss. 1, Article 8. Available at: http://repository.cmu.edu/jpc/vol2/iss1/8. | | *Dwork, Cynthia and Moni Naor. 2010. On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy, Journal of Privacy and Confidentiality: Vol. 2: Iss. 1, Article 8. Available at: http://repository.cmu.edu/jpc/vol2/iss1/8. |
| *Kifer, Daniel and Ashwin Machanavajjhala. 2011. No free lunch in data privacy. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of data (SIGMOD '11). ACM, New York, NY, USA, 193-204. . | | *Kifer, Daniel and Ashwin Machanavajjhala. 2011. No free lunch in data privacy. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of data (SIGMOD '11). ACM, New York, NY, USA, 193-204. . |
第483行: |
第474行: |
| *Differential Privacy by Cynthia Dwork, ICALP July 2006. | | *Differential Privacy by Cynthia Dwork, ICALP July 2006. |
| *The Algorithmic Foundations of Differential Privacy by Cynthia Dwork and Aaron Roth, 2014. | | *The Algorithmic Foundations of Differential Privacy by Cynthia Dwork and Aaron Roth, 2014. |
− | * Differential Privacy: A Survey of Results by Cynthia Dwork, Microsoft Research, April 2008 | + | *Differential Privacy: A Survey of Results by Cynthia Dwork, Microsoft Research, April 2008 |
| *Privacy of Dynamic Data: Continual Observation and Pan Privacy by Moni Naor, Institute for Advanced Study, November 2009 | | *Privacy of Dynamic Data: Continual Observation and Pan Privacy by Moni Naor, Institute for Advanced Study, November 2009 |
| *Tutorial on Differential Privacy by Katrina Ligett, California Institute of Technology, December 2013 | | *Tutorial on Differential Privacy by Katrina Ligett, California Institute of Technology, December 2013 |
| *A Practical Beginner's Guide To Differential Privacy by Christine Task, Purdue University, April 2012 | | *A Practical Beginner's Guide To Differential Privacy by Christine Task, Purdue University, April 2012 |
− | * Private Map Maker v0.2 on the Common Data Project blog | + | *Private Map Maker v0.2 on the Common Data Project blog |
| *Learning Statistics with Privacy, aided by the Flip of a Coin by Úlfar Erlingsson, Google Research Blog, October 2014 | | *Learning Statistics with Privacy, aided by the Flip of a Coin by Úlfar Erlingsson, Google Research Blog, October 2014 |
| *Technology Factsheet: Differential Privacy by Raina Gandhi and Amritha Jayanti, Belfer Center for Science and International Affairs, Fall 2020 | | *Technology Factsheet: Differential Privacy by Raina Gandhi and Amritha Jayanti, Belfer Center for Science and International Affairs, Fall 2020 |
第494行: |
第485行: |
| *私人地图制作者 v0.2 on the Common Data Project Blog | | *私人地图制作者 v0.2 on the Common Data Project Blog |
| *Learning Statistics with Privacy, aided by the Flip of a Coin by Úlfar Erlingsson, Google Research Blog, October 2014 | | *Learning Statistics with Privacy, aided by the Flip of a Coin by Úlfar Erlingsson, Google Research Blog, October 2014 |
− | * Technology Factsheet: 差分隐私地图制作者 Raina Gandhi and Amritha Jayanti,Belfer Center for Science and International Affairs,Fall 2020 | + | *Technology Factsheet: 差分隐私,Raina Gandhi and Amritha Jayanti,Belfer Center for Science and International Affairs,Fall 2020 |
| | | |
| [[Category:Theory of cryptography]] | | [[Category:Theory of cryptography]] |