'''Differential privacy''' is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. The idea behind differential privacy is that if the effect of making an arbitrary single substitution in the database is small enough, the query result cannot be used to infer much about any single individual, and therefore provides privacy. Another way to describe differential privacy is as a constraint on the algorithms used to publish aggregate information about a [[statistical database]] which limits the disclosure of private information of records whose information is in the database. For example, differentially private algorithms are used by some government agencies to publish demographic information or other statistical aggregates while ensuring [[confidentiality]] of survey responses, and [[#Adoption of differential privacy in real-world applications|by companies]] to collect information about user behavior while controlling what is visible even to internal analysts.

Roughly, an algorithm is differentially private if an observer seeing its output cannot tell if a particular individual's information was used in the computation.

Differential privacy was developed by cryptographers, so it is often associated with [[cryptography]], and it draws much of its language from that field.

== History ==
Official statistics organizations are charged with collecting information from individuals or establishments, and publishing aggregate data to serve the public interest. For example, the [[1790 United States Census]] collected information about individuals living in the United States and published tabulations based on sex, age, race, and condition of servitude. Statistical organizations have long collected information under a promise of [[confidentiality]] that the information provided will be used for statistical purposes, but that the publications will not produce information that can be traced back to a specific individual or establishment. To accomplish this goal, statistical organizations have long suppressed information in their publications. For example, in a table presenting the sales of each business in a town grouped by business category, a cell that has information from only one company might be suppressed, in order to maintain the confidentiality of that company's specific sales.

Since then, subsequent research has demonstrated many approaches for producing very accurate statistics from a database while still ensuring high levels of privacy.<ref name=":5" /><ref name=":6" />

== ε-differential privacy ==
The 2006 Dwork, McSherry, Nissim and Smith article introduced the concept of ε-differential privacy, a mathematical definition for the privacy loss associated with any data release drawn from a statistical database. (Here, the term ''statistical database'' means a set of data that are collected under the pledge of confidentiality for the purpose of producing statistics that, by their production, do not compromise the privacy of those individuals who provided the data.)

The 2006 paper gives both a mathematical definition of differential privacy and a mechanism, based on the addition of Laplace noise (i.e. noise drawn from the [[Laplace distribution]]), that satisfies the definition.

=== Definition of ε-differential privacy ===
Let ε be a positive [[real number]] and <math>\mathcal{A}</math> be a [[randomized algorithm]] that takes a dataset as input (representing the actions of the trusted party holding the data).
Let <math>\textrm{im}\ \mathcal{A}</math> denote the [[image (mathematics)|image]] of <math>\mathcal{A}</math>. The algorithm <math>\mathcal{A}</math> is said to provide <math>\epsilon</math>-differential privacy if, for all datasets <math>D_1</math> and <math>D_2</math> that differ on a single element (i.e., the data of one person), and all subsets <math>S</math> of <math>\textrm{im}\ \mathcal{A}</math>:

:<math>\Pr[\mathcal{A}(D_1) \in S] \leq \exp(\epsilon) \cdot \Pr[\mathcal{A}(D_2) \in S],</math>

where the probability is taken over the randomness used by the algorithm.

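As a hypothetical numeric illustration of the definition (the numbers here are chosen for exposition, not taken from a source): suppose a mechanism satisfies <math>\epsilon</math>-differential privacy with <math>\epsilon = \ln 2</math>, and on some dataset <math>D_1</math> it produces an output in a set <math>S</math> with probability 0.3. Applying the inequality in both directions (swapping <math>D_1</math> and <math>D_2</math>) shows that on any neighboring dataset <math>D_2</math>,

:<math>0.15 = \frac{0.3}{e^{\epsilon}} \;\leq\; \Pr[\mathcal{A}(D_2) \in S] \;\leq\; e^{\epsilon} \cdot 0.3 = 0.6 .</math>

In other words, the presence or absence of any one person's data can change the probability of any observable outcome by at most a factor of <math>e^{\epsilon} = 2</math>.
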
Differential privacy offers strong and robust guarantees that facilitate the modular design and analysis of differentially private mechanisms, owing to its composability, robustness to post-processing, and graceful degradation in the presence of correlated data.

=== Composability ===
'''(Self-)composability''' refers to the fact that the joint distribution of the outputs of (possibly adaptively chosen) differentially private mechanisms satisfies differential privacy.

'''Sequential composition.''' If there are <math>n</math> independent mechanisms <math>\mathcal{M}_1, \dots, \mathcal{M}_n</math> whose privacy guarantees are <math>\epsilon_1, \dots, \epsilon_n</math>-differential privacy, respectively, then any function <math>g</math> of them, <math>g(\mathcal{M}_1, \dots, \mathcal{M}_n)</math>, is <math>\left(\sum_i \epsilon_i\right)</math>-differentially private.<ref name="PINQ" />

'''Parallel composition.''' If the previous mechanisms are instead computed on ''disjoint'' subsets of the private database, then the function <math>g</math> is <math>(\max_i \epsilon_i)</math>-differentially private.<ref name="PINQ" />

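Read as bookkeeping rules for a privacy loss budget, the two composition results can be applied mechanically. The sketch below is a minimal Python illustration (the function names are ours, not from any differential privacy library):

<syntaxhighlight lang="python">
def sequential_epsilon(epsilons):
    """Budget spent when the mechanisms all query the SAME records:
    the individual epsilons add up."""
    return sum(epsilons)

def parallel_epsilon(epsilons):
    """Budget spent when each mechanism queries a DISJOINT subset of
    the records: only the largest epsilon counts."""
    return max(epsilons)

budgets = [0.25, 0.25, 0.5]           # three differentially private queries
print(sequential_epsilon(budgets))    # 1.0 -- same data queried three times
print(parallel_epsilon(budgets))      # 0.5 -- each query sees different rows
</syntaxhighlight>
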
=== Robustness to post-processing ===
For any deterministic or randomized function <math>F</math> defined over the image of the mechanism <math>\mathcal{A}</math>, if <math>\mathcal{A}</math> satisfies ε-differential privacy, so does <math>F(\mathcal{A})</math>.

Together, composability and robustness to post-processing permit modular construction and analysis of differentially private mechanisms, and they motivate the concept of the privacy loss budget: if all elements of a complex mechanism that access sensitive data are separately differentially private, then so is their combination, followed by arbitrary post-processing.

=== Group privacy ===
In general, ε-differential privacy is designed to protect the privacy between neighboring databases which differ only in one row. This means that no adversary with arbitrary auxiliary information can know if '''one''' particular participant submitted his or her information. However, the definition also extends to protecting databases that differ in <math>c</math> rows, which amounts to ensuring that no adversary with arbitrary auxiliary information can know whether '''<math>c</math>''' particular participants submitted their information. This can be achieved because if <math>c</math> items change, the probability dilation is bounded by <math>\exp ( \epsilon c )</math> instead of <math>\exp ( \epsilon )</math>,<ref name="Dwork, ICALP 2006" /> i.e., for D<sub>1</sub> and D<sub>2</sub> differing on <math>c</math> items:

:<math>\Pr[\mathcal{A}(D_1) \in S] \leq \exp(\epsilon c) \cdot \Pr[\mathcal{A}(D_2) \in S].</math>

Thus, setting ε instead to <math>\epsilon/c</math> achieves the desired result (protection of <math>c</math> items). In other words, instead of each item being protected with ε-differential privacy, every group of <math>c</math> items is protected with ε-differential privacy (and each individual item is protected with <math>(\epsilon/c)</math>-differential privacy).

== ε-differentially private mechanisms ==
Since differential privacy is a probabilistic concept, any differentially private mechanism is necessarily randomized. Some of these, like the Laplace mechanism, described below, rely on adding controlled noise to the function that we want to compute. Others, like the [[Exponential mechanism (differential privacy)|exponential mechanism]]<ref name=":8">[http://research.microsoft.com/pubs/65075/mdviadp.pdf F. McSherry and K. Talwar. Mechanism Design via Differential Privacy. Proceedings of the 48th Annual Symposium of Foundations of Computer Science, 2007.]</ref> and posterior sampling<ref name=":9">[https://arxiv.org/abs/1306.1066 Christos Dimitrakakis, Blaine Nelson, Aikaterini Mitrokotsa, Benjamin Rubinstein. Robust and Private Bayesian Inference. Algorithmic Learning Theory 2014]</ref> sample from a problem-dependent family of distributions instead.

=== Sensitivity ===
Let <math>d</math> be a positive integer, <math>\mathcal{D}</math> be a collection of datasets, and <math>f \colon \mathcal{D} \rightarrow \mathbb{R}^d</math> be a function. The ''sensitivity''<ref name="DMNS06" /> of a function, denoted <math>\Delta f</math>, is defined by
:<math>\Delta f=\max \lVert f(D_1)-f(D_2) \rVert_1,</math>
where the maximum is over all pairs of datasets <math>D_1</math> and <math>D_2</math> in <math>\mathcal{D}</math> differing in at most one element. For example, a query that counts how many rows in the database have a given property has sensitivity 1, because changing the data of a single person can change the count by at most one.

We can also design differentially private algorithms for functions with low sensitivity, using techniques described below.

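The sensitivity of a simple query can be checked mechanically. The following Python sketch (a toy illustration with an assumed universe of 0/1 records, not code from the cited papers) exhaustively verifies that a counting query has sensitivity 1 by enumerating every pair of small datasets that differ in one record:

<syntaxhighlight lang="python">
from itertools import product

def count_with_property(dataset):
    """The query f: number of records that have the property (value 1)."""
    return sum(dataset)

# Toy universe: datasets of 4 records, each record either 0 or 1.
n = 4
max_change = 0
for d1 in product([0, 1], repeat=n):
    for i in range(n):            # substitute record i ...
        for v in (0, 1):          # ... with every possible value
            d2 = d1[:i] + (v,) + d1[i + 1:]
            change = abs(count_with_property(d1) - count_with_property(d2))
            max_change = max(max_change, change)

print(max_change)  # 1: the sensitivity of a counting query
</syntaxhighlight>
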
=== The Laplace mechanism ===
{{See also|Additive noise mechanisms}}
The Laplace mechanism adds Laplace noise (i.e. noise from the [[Laplace distribution]], which can be expressed by probability density function <math>\text{noise}(y)\propto \exp(-|y|/\lambda)\,\!</math>, which has mean zero and standard deviation <math>\sqrt{2} \lambda\,\!</math>). Now in our case we define the output function of <math>\mathcal{A}\,\!</math> as a real valued function (called the transcript output by <math>\mathcal{A}\,\!</math>) as <math>\mathcal{T}_{\mathcal{A}}(x)=f(x)+Y\,\!</math> where <math>Y \sim \text{Lap}(\lambda)\,\!</math> and <math>f\,\!</math> is the original real valued query/function we planned to execute on the database. Now clearly <math>\mathcal{T}_{\mathcal{A}}(x)\,\!</math> can be considered to be a continuous random variable, where

:<math>\frac{\mathrm{pdf}(\mathcal{T}_{\mathcal{A},D_1}(x)=t)}{\mathrm{pdf}(\mathcal{T}_{\mathcal{A},D_2}(x)=t)}=\frac{\text{noise}(t-f(D_1))}{\text{noise}(t-f(D_2))}</math>

is at most <math>e^{\frac{|f(D_1)-f(D_2)|}{\lambda}}\leq e^{\frac{\Delta(f)}{\lambda}}</math>. We can thus take <math>\frac{\Delta(f)}{\lambda}</math> to be the privacy factor <math>\epsilon</math>: choosing the noise scale <math>\lambda=\Delta(f)/\epsilon</math> makes <math>\mathcal{T}_{\mathcal{A}}</math> an <math>\epsilon</math>-differentially private mechanism, as can be seen from the definition above.

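A minimal Python sketch of the mechanism just described (illustrative only; a real deployment would need a vetted library, since naive floating-point noise is known to leak information): it answers a counting query, whose sensitivity is 1, with Laplace noise of scale <math>\lambda=\Delta f/\epsilon</math>.

<syntaxhighlight lang="python">
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release true_answer plus Laplace noise of scale sensitivity/epsilon,
    giving epsilon-differential privacy for a query of this sensitivity."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon           # lambda = Delta(f) / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# Example: privately release how many records have a property (a counting
# query, sensitivity 1), in the spirit of the queries Q_i discussed below.
database = [1, 1, 0, 1, 0, 0, 1, 0]         # 1 = record has the property
true_count = sum(database)
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
</syntaxhighlight>
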
Continuing the example, if we construct <math>D_2</math> by replacing (Chandler, 1) with (Chandler, 0), then this malicious adversary would be able to distinguish <math>D_2</math> from <math>D_1</math> by computing <math>Q_5 - Q_4</math> for each dataset. If the adversary were instead required to receive the values <math>Q_i</math> via an <math>\epsilon</math>-differentially private algorithm, then for a sufficiently small <math>\epsilon</math> he or she would be unable to distinguish the two datasets.

=== Randomized response ===
{{See also|Local differential privacy}}
A simple example, developed in the social sciences, is to ask a person to answer a yes/no question using the following procedure: flip a coin; if heads, answer the question honestly; if tails, flip the coin again and answer "Yes" if heads, "No" if tails. The privacy then arises from the refutability of each individual response.

Although this example, inspired by [[randomized response]], might be applicable to [[microdata]] (i.e., releasing datasets with each individual response), differential privacy by definition excludes microdata releases and is only applicable to queries (i.e., aggregating individual responses into one result), as a microdata release would violate the requirements, more specifically the plausible deniability that a subject participated or not.<ref name=":10" /><ref name=":11" />

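With fair coins, this scheme reports "Yes" with probability 3/4 when the true answer is "Yes" and with probability 1/4 when it is "No", so each reported bit satisfies the definition above with <math>\epsilon = \ln 3</math>. The following Python sketch (our illustration, not code from a referenced system) simulates the procedure and shows how an analyst can still estimate the population proportion by inverting the known bias:

<syntaxhighlight lang="python">
import random

def randomized_response(truth, rng):
    """First coin heads: answer honestly.
    First coin tails: answer by a second coin flip.
    Pr[yes | truth = yes] = 3/4 and Pr[yes | truth = no] = 1/4,
    so the probability ratio is 3 = exp(ln 3)."""
    if rng.random() < 0.5:           # first coin: heads
        return truth                 # honest answer
    return rng.random() < 0.5        # second coin decides the answer

# Simulate a survey in which 30% of respondents truly have the attribute.
rng = random.Random(0)
n = 100_000
answers = [randomized_response(rng.random() < 0.3, rng) for _ in range(n)]

# E[mean(answers)] = 1/4 + p/2, so invert the bias to estimate p.
p_hat = 2 * (sum(answers) / n) - 0.5
print(round(p_hat, 3))   # close to 0.3
</syntaxhighlight>
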
=== Stable transformations ===
A transformation <math>T</math> is <math>c</math>-stable if the [[Hamming distance]] between <math>T(A)</math> and <math>T(B)</math> is at most <math>c</math>-times the Hamming distance between <math>A</math> and <math>B</math> for any two databases <math>A,B</math>. Theorem 2 in <ref name="PINQ" /> asserts that if there is a mechanism <math>M</math> that is <math>\epsilon</math>-differentially private, then the composite mechanism <math>M\circ T</math> is <math>(\epsilon \times c)</math>-differentially private.

This could be generalized to group privacy, as the group size could be thought of as the Hamming distance <math>h</math> between <math>A</math> and <math>B</math> (where <math>A</math> contains the group and <math>B</math> doesn't). In this case <math>M\circ T</math> is <math>(\epsilon \times c \times h)</math>-differentially private.

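As a toy illustration of stability (our example; the record-wise clipping transformation is hypothetical, not from the cited theorem): any transformation that rewrites each record independently is 1-stable, because changing one input record changes at most one output record, so composing it with an ε-differentially private mechanism still yields ε-differential privacy.

<syntaxhighlight lang="python">
def clip_each_record(db, lo=0.0, hi=1.0):
    """A 1-stable transformation T: record i of the output depends only on
    record i of the input, so Hamming(T(A), T(B)) <= Hamming(A, B)."""
    return [min(max(x, lo), hi) for x in db]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

A = [0.2, 5.0, -3.0, 0.9]
B = [0.2, 5.0, 100.0, 0.9]       # a neighbor of A: one record replaced
print(hamming(A, B))                                      # 1
print(hamming(clip_each_record(A), clip_each_record(B)))  # at most 1
</syntaxhighlight>
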
== Adoption of differential privacy in real-world applications ==
{{see also|Implementations of differentially private analyses}}
Several uses of differential privacy in practice are known to date:
*2008: [[United States Census Bureau|U.S. Census Bureau]], for showing commuting patterns.<ref name="MachanavajjhalaKAGV08" />
*2014: [[Google]]'s RAPPOR, for telemetry such as learning statistics about unwanted software hijacking users' settings.<ref name="RAPPOR" /><ref>{{Citation|title=google/rappor|date=2021-07-15|url=https://github.com/google/rappor|publisher=GitHub}}</ref>
*2015: Google, for sharing historical traffic statistics.<ref name="Eland" />
*2016: [[Apple Inc.|Apple]] announced its intention to use differential privacy in [[iOS 10]] to improve its [[Intelligent personal assistant]] technology.<ref>{{cite web|title=Apple - Press Info - Apple Previews iOS 10, the Biggest iOS Release Ever|url=https://www.apple.com/pr/library/2016/06/13Apple-Previews-iOS-10-The-Biggest-iOS-Release-Ever.html|website=Apple|access-date=16 June 2016}}</ref>
*2017: Microsoft, for telemetry in Windows.<ref name="DpWinTelemetry" />
*2019: Privitar Lens is an API using differential privacy.<ref>{{cite web|title=Privitar Lens|url=https://www.privitar.com/privitar-lens|access-date=20 February 2018}}</ref>
*2020: LinkedIn, for advertiser queries.<ref name="DpLinkedIn" />

== Public purpose considerations ==
There are several public purpose considerations regarding differential privacy that are important to consider, especially for policymakers and policy-focused audiences interested in the social opportunities and risks of the technology:<ref name=":15">{{Cite web|title=Technology Factsheet: Differential Privacy|url=https://www.belfercenter.org/publication/technology-factsheet-differential-privacy|access-date=2021-04-12|website=Belfer Center for Science and International Affairs|language=en}}</ref>

*'''Data Utility & Accuracy.''' The main concern with differential privacy is the tradeoff between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less “noise” is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more “noise” is injected into the system). It is important for policymakers to consider the tradeoffs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this tradeoff.
*'''Data Privacy & Security.''' Differential privacy provides a quantified measure of privacy loss and an upper bound, and allows curators to choose the explicit tradeoff between privacy and accuracy. It is robust to still-unknown privacy attacks. However, it encourages greater data sharing, which, if done poorly, increases privacy risk. Differential privacy implies that privacy is protected, but this depends very much on the privacy loss parameter chosen, and a poor choice may instead lead to a false sense of security. Finally, though differential privacy is robust against unforeseen future privacy attacks, a countermeasure may be devised that we cannot predict.

== See also ==
*[[Quasi-identifier]]
*[[Exponential mechanism (differential privacy)]] – a technique for designing differentially private algorithms
*[[k-anonymity]]
*[[Differentially private analysis of graphs]]
*[[Protected health information]]

== References ==
{{Reflist|refs=
<ref name="DKMMN06">Dwork, Cynthia, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. "Our data, ourselves: Privacy via distributed noise generation." In Advances in Cryptology – EUROCRYPT 2006, pp. 486–503. Springer Berlin Heidelberg, 2006.</ref>
<ref name="MachanavajjhalaKAGV08">Ashwin Machanavajjhala, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. "Privacy: Theory meets Practice on the Map". In Proceedings of the 24th International Conference on Data Engineering (ICDE), 2008.</ref>
<ref name="RAPPOR">Úlfar Erlingsson, Vasyl Pihur, Aleksandra Korolova. "RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response". In Proceedings of the 21st ACM Conference on Computer and Communications Security (CCS), 2014.</ref>
<ref name="DMNS06">Calibrating Noise to Sensitivity in Private Data Analysis by Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith. In Theory of Cryptography Conference (TCC), Springer, 2006. The full version appears in Journal of Privacy and Confidentiality, 7 (3), 17–51.</ref>
<ref name="PINQ">Privacy integrated queries: an extensible platform for privacy-preserving data analysis by Frank D. McSherry. In Proceedings of the 35th SIGMOD International Conference on Management of Data (SIGMOD), 2009.</ref>
<ref name="Dwork, ICALP 2006">Differential Privacy by Cynthia Dwork, International Colloquium on Automata, Languages and Programming (ICALP) 2006, p. 1–12.</ref>
<ref name="DPBook">The Algorithmic Foundations of Differential Privacy by Cynthia Dwork and Aaron Roth. Foundations and Trends in Theoretical Computer Science. Vol. 9, no. 3–4, pp. 211–407, Aug. 2014.</ref>
<ref name="Eland">[https://europe.googleblog.com/2015/11/tackling-urban-mobility-with-technology.html Tackling Urban Mobility with Technology] by Andrew Eland. Google Policy Europe Blog, Nov 18, 2015.</ref>
<ref name="DpWinTelemetry">[https://www.microsoft.com/en-us/research/publication/collecting-telemetry-data-privately/ Collecting telemetry data privately] by Bolin Ding, Jana Kulkarni, Sergey Yekhanin. NIPS 2017.</ref>
<ref name="DpLinkedIn">[https://arxiv.org/abs/2002.05839 LinkedIn's Audience Engagements API: A Privacy Preserving Data Analytics System at Scale] by Ryan Rogers, Subbu Subramaniam, Sean Peng, David Durfee, Seunghyun Lee, Santosh Kumar Kancha, Shraddha Sahay, Parvez Ahammad. arXiv:2002.05839.</ref>
<ref name="DP19">[https://arxiv.org/abs/1906.01337 SoK: Differential Privacies] by Damien Desfontaines, Balázs Pejó. 2019.</ref>
}}

== Further reading ==
*[https://desfontain.es/privacy/index.html A reading list on differential privacy]
*[https://journalprivacyconfidentiality.org/index.php/jpc/article/view/404 Abowd, John. 2017. “How Will Statistical Agencies Operate When All Data Are Private?”. Journal of Privacy and Confidentiality 7 (3).] {{doi|10.29012/jpc.v7i3.404}} ([https://www2.census.gov/cac/sac/meetings/2017-09/role-statistical-agency.pdf slides])
*[http://www.jetlaw.org/wp-content/uploads/2018/12/4_Wood_Final.pdf "Differential Privacy: A Primer for a Non-technical Audience"], Kobbi Nissim, Thomas Steinke, Alexandra Wood, [[Micah Altman]], Aaron Bembenek, Mark Bun, Marco Gaboardi, David R. O’Brien, and Salil Vadhan, Harvard Privacy Tools Project, February 14, 2018
*Dinur, Irit and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems (PODS '03). ACM, New York, NY, USA, 202–210. {{doi|10.1145/773153.773173}}.
*Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. in Halevi, S. & Rabin, T. (Eds.) Calibrating Noise to Sensitivity in Private Data Analysis. Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4–7, 2006. Proceedings, Springer Berlin Heidelberg, 265–284, {{doi|10.1007/11681878_14}}.
*Dwork, Cynthia. 2006. Differential Privacy, 33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006), Springer Verlag, 4052, 1–12, {{ISBN|3-540-35907-9}}.
*Dwork, Cynthia and Aaron Roth. 2014. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science. Vol. 9, Nos. 3–4. 211–407, {{doi|10.1561/0400000042}}.
*Machanavajjhala, Ashwin, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. 2008. Privacy: Theory Meets Practice on the Map, International Conference on Data Engineering (ICDE) 2008: 277–286, {{doi|10.1109/ICDE.2008.4497436}}.
*Dwork, Cynthia and Moni Naor. 2010. On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy, Journal of Privacy and Confidentiality: Vol. 2: Iss. 1, Article 8. Available at: http://repository.cmu.edu/jpc/vol2/iss1/8.
*Kifer, Daniel and Ashwin Machanavajjhala. 2011. No free lunch in data privacy. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of data (SIGMOD '11). ACM, New York, NY, USA, 193–204. {{doi|10.1145/1989323.1989345}}.
*Erlingsson, Úlfar, Vasyl Pihur and Aleksandra Korolova. 2014. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS '14). ACM, New York, NY, USA, 1054–1067. {{doi|10.1145/2660267.2660348}}.
*Abowd, John M. and Ian M. Schmutte. 2017. Revisiting the economics of privacy: Population statistics and confidentiality protection as public goods. Labor Dynamics Institute, Cornell University, at https://digitalcommons.ilr.cornell.edu/ldi/37/
*Abowd, John M. and Ian M. Schmutte. Forthcoming. An Economic Analysis of Privacy Protection and Statistical Accuracy as Social Choices. American Economic Review, {{arxiv|1808.06303}}
*Apple, Inc. 2016. Apple previews iOS 10, the biggest iOS release ever. Press Release (June 13). https://www.apple.com/newsroom/2016/06/apple-previews-ios-10-biggest-ios-release-ever.html.
*Ding, Bolin, Janardhan Kulkarni, and Sergey Yekhanin 2017. Collecting Telemetry Data Privately, NIPS 2017.
*http://www.win-vector.com/blog/2015/10/a-simpler-explanation-of-differential-privacy/
*Ryffel, Theo, Andrew Trask, et al. [[arxiv:1811.04017|"A generic framework for privacy preserving deep learning"]]

== External links ==
*[https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf Differential Privacy] by Cynthia Dwork, ICALP July 2006.
*[http://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf The Algorithmic Foundations of Differential Privacy] by Cynthia Dwork and Aaron Roth, 2014.
*[http://research.microsoft.com/apps/pubs/default.aspx?id=74339 Differential Privacy: A Survey of Results] by Cynthia Dwork, Microsoft Research, April 2008
*[http://video.ias.edu/csdm/dynamicdata Privacy of Dynamic Data: Continual Observation and Pan Privacy] by Moni Naor, Institute for Advanced Study, November 2009
*[http://simons.berkeley.edu/talks/katrina-ligett-2013-12-11 Tutorial on Differential Privacy] by [[Katrina Ligett]], California Institute of Technology, December 2013
*[http://www.cerias.purdue.edu/news_and_events/events/security_seminar/details/index/j9cvs3as2h1qds1jrdqfdc3hu8 A Practical Beginner's Guide To Differential Privacy] by Christine Task, Purdue University, April 2012
*[https://commondataproject.org/blog/2011/04/27/the-cdp-private-map-maker-v0-2/ Private Map Maker v0.2] on the Common Data Project blog
*[https://research.googleblog.com/2014/10/learning-statistics-with-privacy-aided.html Learning Statistics with Privacy, aided by the Flip of a Coin] by Úlfar Erlingsson, Google Research Blog, October 2014
*[https://www.belfercenter.org/publication/technology-factsheet-differential-privacy Technology Factsheet: Differential Privacy] by Raina Gandhi and Amritha Jayanti, Belfer Center for Science and International Affairs, Fall 2020

[[Category:Theory of cryptography]]
[[Category:Information privacy]]

<noinclude>
<small>This page was moved from [[wikipedia:en:Differential privacy]]. Its edit history can be viewed at [[差分隐私/edithistory]]</small></noinclude>
− | {| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
| |
− | |-
| |
− | ! 姓名! !患有糖尿病(x) |
| |
− | ==Adoption of differential privacy in real-world applications==
| |
− | {{see also|Implementations of differentially private analyses}}
| |
− | Several uses of differential privacy in practice are known to date:
| |
− | *2008: [[United States Census Bureau|U.S. Census Bureau]], for showing commuting patterns.<ref name="MachanavajjhalaKAGV08" />
| |
− | * 2014: [[Google]]'s RAPPOR, for telemetry such as learning statistics about unwanted software hijacking users' settings. <ref name="RAPPOR" /><ref>{{Citation|title=google/rappor|date=2021-07-15|url=https://github.com/google/rappor|publisher=GitHub}}</ref>
| |
− | *2015: Google, for sharing historical traffic statistics.<ref name="Eland" />
| |
− | *2016: [[Apple Inc.|Apple]] announced its intention to use differential privacy in [[iOS 10]] to improve its [[Intelligent personal assistant]] technology.<ref>{{cite web|title=Apple - Press Info - Apple Previews iOS 10, the Biggest iOS Release Ever|url=https://www.apple.com/pr/library/2016/06/13Apple-Previews-iOS-10-The-Biggest-iOS-Release-Ever.html|website=Apple|access-date=16 June 2016}}</ref>
| |
− | *2017: Microsoft, for telemetry in Windows.<ref name="DpWinTelemetry" />
| |
− | *2019: Privitar Lens is an API using differential privacy.<ref>{{cite web|title=Privitar Lens|url=https://www.privitar.com/privitar-lens|access-date=20 February 2018}}</ref>
| |
− | * 2020: LinkedIn, for advertiser queries.<ref name="DpLinkedIn" />
| |
− | | |
− | | |
− | Several uses of differential privacy in practice are known to date:
| |
− | *2008: U.S. Census Bureau, for showing commuting patterns.
| |
− | *2014: Google's RAPPOR, for telemetry such as learning statistics about unwanted software hijacking users' settings.
| |
− | *2015: Google, for sharing historical traffic statistics.
| |
− | *2016: Apple announced its intention to use differential privacy in iOS 10 to improve its Intelligent personal assistant technology.
| |
− | *2017: Microsoft, for telemetry in Windows.
| |
− | *2019: Privitar Lens is an API using differential privacy.
| |
− | *2020: LinkedIn, for advertiser queries.
| |
− | | |
− | 2008: u.s. Census Bureau,for shows comforting patterns. 在实践中,差分隐私的几个用途已经为人所知:
| |
− | *2008: 美国人口普查局,显示通勤模式。
| |
− | *2014年: 谷歌的 RAPPOR,用于遥测,例如了解不受欢迎的软件劫持用户设置的统计数据。2015: Google,分享历史流量统计数据。
| |
− | *2016年: 苹果公司宣布打算在 iOS 10中使用差分隐私智能个人助理来改进其智能个人助理技术。
| |
− | *2017: 微软,Windows 遥测系统。2019: priveritar Lens 是一个使用差分隐私的 API。2020: LinkedIn,for advertiser queries.
| |
− | | |
− | ==Public purpose considerations ==
| |
− | There are several public purpose considerations regarding differential privacy that are important to consider, especially for policymakers and policy-focused audiences interested in the social opportunities and risks of the technology:<ref>{{Cite web|title=Technology Factsheet: Differential Privacy|url=https://www.belfercenter.org/publication/technology-factsheet-differential-privacy|access-date=2021-04-12|website=Belfer Center for Science and International Affairs|language=en}}</ref>
| |
− | | |
− | There are several public purpose considerations regarding differential privacy that are important to consider, especially for policymakers and policy-focused audiences interested in the social opportunities and risks of the technology:
| |
− | | |
− | 关于差分隐私技术,有几个公共目的方面的考虑是需要考虑的,特别是对于那些对技术的社会机遇和风险感兴趣的决策者和政策关注的受众:
| |
− | | |
− | *'''Data Utility & Accuracy.''' The main concern with differential privacy is the tradeoff between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less “noise” is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more “noise” is injected into the system). It is important for policymakers to consider the tradeoffs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this tradeoff.
| |
− | | |
− | * Data Utility & Accuracy. The main concern with differential privacy is the tradeoff between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less “noise” is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more “noise” is injected into the system). It is important for policymakers to consider the tradeoffs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this tradeoff.
| |
− | | |
− | | |
− | *数据的实用性及准确性。差分隐私的主要关注点在于数据效用和个人隐私之间的权衡。如果将隐私损失参数设置为有利于实用性,则隐私好处降低(向系统中注入的“噪音”较少) ; 如果将隐私损失参数设置为有利于重隐私性,则数据集的准确性和实用性降低(向系统中注入更多的“噪音”)。对于决策者来说,重要的是要考虑到差分隐私的权衡,以帮助建立适当的最佳实践和标准来使用这种隐私保护实践,特别是考虑到组织用例的多样性。值得注意的是,在所有的统计披露限制方法中,降低准确性和效用是一个共同的问题,并不是差分隐私唯一的。然而,独特之处在于,决策者、研究人员和实施者可以考虑如何减轻这种权衡带来的风险。
| |
− | | |
− | *'''Data Privacy & Security.''' Differential privacy provides a quantified measure of privacy loss and an upper bound and allows curators to choose the explicit tradeoff between privacy and accuracy. It is robust to still unknown privacy attacks. However, it encourages greater data sharing, which if done poorly, increases privacy risk. Differential privacy implies that privacy is protected, but this depends very much on the privacy loss parameter chosen and may instead lead to a false sense of security. Finally, though it is robust against unforeseen future privacy attacks, a countermeasure may be devised that we cannot predict.
| |
− | | |
− | *Data Privacy & Security. Differential privacy provides a quantified measure of privacy loss and an upper bound and allows curators to choose the explicit tradeoff between privacy and accuracy. It is robust to still unknown privacy attacks. However, it encourages greater data sharing, which if done poorly, increases privacy risk. Differential privacy implies that privacy is protected, but this depends very much on the privacy loss parameter chosen and may instead lead to a false sense of security. Finally, though it is robust against unforeseen future privacy attacks, a countermeasure may be devised that we cannot predict.
| |
− | | |
− | | |
− | *资料私隐及保安。差分隐私图书馆提供了一个量化的隐私损失度量和上限,并允许馆长在隐私和准确性之间做出明确的权衡。它对仍然未知的隐私攻击是健壮的。然而,它鼓励更大的数据共享,如果做得不好,会增加隐私风险。差分隐私意味着隐私是受到保护的,但这在很大程度上取决于选择的隐私损失参数,并可能会导致错误的安全感。最后,尽管它对未来不可预见的隐私攻击是健壮的,但可以设计出一种我们无法预测的对策。
| |
− | | |
− | ==See also==
| |
− | *[[Quasi-identifier]]
| |
− | *[[Exponential mechanism (differential privacy)]] – a technique for designing differentially private algorithms
| |
− | *[[k-anonymity]]
| |
− | *[[Differentially private analysis of graphs]]
| |
− | *[[Protected health information]]
| |
− | | |
− | *Quasi-identifier
| |
− | * Exponential mechanism (differential privacy) – a technique for designing differentially private algorithms
| |
− | *k-anonymity
| |
− | *Differentially private analysis of graphs
| |
− | *Protected health information
| |
− | | |
− | | |
− | *准标识符
| |
− | *指数机制(差分隐私)-一种设计不同私有算法的技术
| |
− | * k-匿名
| |
− | *图的不同私有分析
| |
− | *受保护的健康信息
| |
− | | |
− | ==References==
| |
| {{Reflist|refs= | | {{Reflist|refs= |
| <ref name="DKMMN06"> | | <ref name="DKMMN06"> |
第634行: |
第454行: |
| }} | | }} |
| | | |
− | ==Further reading== | + | ==Further reading 进一步阅读== |
| *[https://desfontain.es/privacy/index.html A reading list on differential privacy] | | *[https://desfontain.es/privacy/index.html A reading list on differential privacy] |
| *[https://journalprivacyconfidentiality.org/index.php/jpc/article/view/404 Abowd, John. 2017. “How Will Statistical Agencies Operate When All Data Are Private?”. Journal of Privacy and Confidentiality 7 (3).] {{doi|10.29012/jpc.v7i3.404}} ([https://www2.census.gov/cac/sac/meetings/2017-09/role-statistical-agency.pdf slides]) | | *[https://journalprivacyconfidentiality.org/index.php/jpc/article/view/404 Abowd, John. 2017. “How Will Statistical Agencies Operate When All Data Are Private?”. Journal of Privacy and Confidentiality 7 (3).] {{doi|10.29012/jpc.v7i3.404}} ([https://www2.census.gov/cac/sac/meetings/2017-09/role-statistical-agency.pdf slides]) |
第653行: |
第473行: |
| *Ryffel, Theo, Andrew Trask, et. al. [[arxiv:1811.04017|"A generic framework for privacy preserving deep learning"]] | | *Ryffel, Theo, Andrew Trask, et. al. [[arxiv:1811.04017|"A generic framework for privacy preserving deep learning"]] |
| | | |
− | ==External links== | + | ==External links 相关链接== |
| *[https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf Differential Privacy] by Cynthia Dwork, ICALP July 2006. | | *[https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf Differential Privacy] by Cynthia Dwork, ICALP July 2006. |
| *[http://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf The Algorithmic Foundations of Differential Privacy] by Cynthia Dwork and Aaron Roth, 2014. | | *[http://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf The Algorithmic Foundations of Differential Privacy] by Cynthia Dwork and Aaron Roth, 2014. |
第697行: |
第483行: |
| *[https://research.googleblog.com/2014/10/learning-statistics-with-privacy-aided.html Learning Statistics with Privacy, aided by the Flip of a Coin] by Úlfar Erlingsson, Google Research Blog, October 2014 | | *[https://research.googleblog.com/2014/10/learning-statistics-with-privacy-aided.html Learning Statistics with Privacy, aided by the Flip of a Coin] by Úlfar Erlingsson, Google Research Blog, October 2014 |
| *[https://www.belfercenter.org/publication/technology-factsheet-differential-privacy Technology Factsheet: Differential Privacy] by Raina Gandhi and Amritha Jayanti, Belfer Center for Science and International Affairs, Fall 2020 | | *[https://www.belfercenter.org/publication/technology-factsheet-differential-privacy Technology Factsheet: Differential Privacy] by Raina Gandhi and Amritha Jayanti, Belfer Center for Science and International Affairs, Fall 2020 |
| |
| | | |
| [[Category:Theory of cryptography]] | | [[Category:Theory of cryptography]] |
| [[Category:Information privacy]] | | [[Category:Information privacy]] |
| |
| <noinclude> | | <noinclude> |
| | | |
− | <small>This page was moved from [[wikipedia:en:Differential privacy]]. Its edit history can be viewed at [[差分隐私/edithistory]]</small></noinclude> | + | <small>'''This entry is based on the Wikipedia article [[wikipedia:en:Differential privacy]] and other public sources, under the CC 3.0 license.''' Its edit history can be viewed at [[差分隐私/edithistory]]</small></noinclude> |
| + | [[Category:Information privacy]] |