In 2003, [[Kobbi Nissim]] and [[Irit Dinur]] demonstrated that it is impossible to publish arbitrary queries on a private statistical database without revealing some amount of private information, and that the entire information content of the database can be revealed by publishing the results of a surprisingly small number of random queries—far fewer than was implied by previous work.<ref name=":2">Irit Dinur and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems (PODS '03). ACM, New York, NY, USA, 202–210. {{doi|10.1145/773153.773173}}</ref> The general phenomenon is known as the [[Reconstruction attack|Fundamental Law of Information Recovery]], and its key insight, namely that in the most general case, privacy cannot be protected without injecting some amount of noise, led to the development of differential privacy.

In 2006, [[Cynthia Dwork]], [[Frank McSherry]], [[Kobbi Nissim]] and [[Adam D. Smith]] published an article formalizing the amount of noise that needed to be added and proposing a generalized mechanism for doing so.<ref name="DMNS06" /> Their work was a co-recipient of the 2016 TCC Test-of-Time Award<ref name=":3">{{cite web |title=TCC Test-of-Time Award |url=https://www.iacr.org/workshops/tcc/awards.html}}</ref> and the 2017 [[Gödel Prize]].<ref name=":4">{{cite web |title=2017 Gödel Prize |url=https://www.eatcs.org/index.php/component/content/article/1-news/2450-2017-godel-prize}}</ref>

Since then, subsequent research has shown that there are many ways to produce very accurate statistics from the database while still ensuring high levels of privacy.<ref name=":5">{{Cite journal|last=Hilton|first=Michael|s2cid=16861132|title=Differential Privacy: A Historical Survey}}</ref><ref name=":6">{{Cite book|title=Theory and Applications of Models of Computation|volume=4978|last=Dwork|first=Cynthia|date=2008-04-25|publisher=Springer Berlin Heidelberg|isbn=9783540792277|editor-last=Agrawal|editor-first=Manindra|series=Lecture Notes in Computer Science|pages=1–19|language=en|chapter=Differential Privacy: A Survey of Results|doi=10.1007/978-3-540-79228-4_1|editor-last2=Du|editor-first2=Dingzhu|editor-last3=Duan|editor-first3=Zhenhua|editor-last4=Li|editor-first4=Angsheng|chapter-url=https://www.microsoft.com/en-us/research/publication/differential-privacy-a-survey-of-results/}}</ref>
==ε-differential privacy==
The 2006 Dwork, McSherry, Nissim and Smith article introduced the concept of ε-differential privacy, a mathematical definition for the privacy loss associated with any data release drawn from a statistical database. (Here, the term ''statistical database'' means a set of data that are collected under the pledge of confidentiality for the purpose of producing statistics that, by their production, do not compromise the privacy of those individuals who provided the data.)

The intuition for the 2006 definition of ε-differential privacy is that a person's privacy cannot be compromised by a statistical release if their data are not in the database. Therefore, with differential privacy, the goal is to give each individual roughly the same privacy that would result from having their data removed. That is, the statistical functions run on the database should not overly depend on the data of any one individual.

Of course, how much any individual contributes to the result of a database query depends in part on how many people's data are involved in the query. If the database contains data from a single person, that person's data contributes 100%. If the database contains data from a hundred people, each person's data contributes just 1%. The key insight of differential privacy is that as the query is made on the data of fewer and fewer people, more noise needs to be added to the query result to produce the same amount of privacy. Hence the name of the 2006 paper, "Calibrating noise to sensitivity in private data analysis."

The 2006 paper presents both a mathematical definition of differential privacy and a mechanism based on the addition of Laplace noise (i.e. noise coming from the [[Laplace distribution]]) that satisfies the definition.
===Definition of ε-differential privacy===
Let ε be a positive [[real number]] and <math>\mathcal{A}</math> be a [[randomized algorithm]] that takes a dataset as input (representing the actions of the trusted party holding the data).
Let <math>\textrm{im}\ \mathcal{A}</math> denote the [[image (mathematics)|image]] of <math>\mathcal{A}</math>. The algorithm <math>\mathcal{A}</math> is said to provide <math>\epsilon</math>-differential privacy if, for all datasets <math>D_1</math> and <math>D_2</math> that differ on a single element (i.e., the data of one person), and all subsets <math>S</math> of <math>\textrm{im}\ \mathcal{A}</math>:

:<math>\Pr[\mathcal{A}(D_1) \in S] \leq \exp(\epsilon) \cdot \Pr[\mathcal{A}(D_2) \in S],</math>

where the probability is taken over the randomness used by the algorithm.<ref name="DPBook" />
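
To make the inequality concrete, the following minimal sketch (an illustration added here, not taken from the 2006 paper; all names are ours) checks it exactly for a one-bit mechanism that reports the true bit with probability <math>e^\epsilon/(1+e^\epsilon)</math>:

<syntaxhighlight lang="python">
# Exact check of the epsilon-DP inequality for a one-bit mechanism
# (illustrative sketch; not from the cited paper).
import math

eps = math.log(3)                        # example privacy budget
p = math.exp(eps) / (1 + math.exp(eps))  # probability of reporting truthfully (3/4 here)

def pr(output_bit, true_bit):
    """Pr[A(D) = output_bit] when the database holds one person's bit true_bit."""
    return p if output_bit == true_bit else 1 - p

# D1 and D2 differ only in that one person's bit (0 vs. 1).
for out in (0, 1):
    ratio = pr(out, 0) / pr(out, 1)
    print(f"output {out}: ratio {ratio:.3f} <= e^eps = {math.exp(eps):.3f}:",
          ratio <= math.exp(eps) + 1e-12)
</syntaxhighlight>

With this choice of <math>p</math> the bound is met with equality for one of the two outputs, which is why this parameterization is the standard one.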
Differential privacy offers strong and robust guarantees that facilitate modular design and analysis of differentially private mechanisms due to its [[#Composability|composability]], [[#Robustness to post-processing|robustness to post-processing]], and graceful degradation in the presence of [[#Group privacy|correlated data]].

===Composability===
'''Sequential composition.''' If we query an ε-differentially private mechanism <math>t</math> times, and the randomization of the mechanism is independent for each query, then the result is <math>\epsilon t</math>-differentially private. More generally, if there are <math>n</math> independent mechanisms <math>\mathcal{M}_1, \dots, \mathcal{M}_n</math> whose privacy guarantees are <math>\epsilon_1, \dots, \epsilon_n</math>-differential privacy, respectively, then any function <math>g</math> of them, <math>g(\mathcal{M}_1, \dots, \mathcal{M}_n)</math>, is <math>\left(\sum_i \epsilon_i\right)</math>-differentially private.

'''Parallel composition.''' If the previous mechanisms are computed on disjoint subsets of the private database, then the function <math>g</math> is <math>(\max_i \epsilon_i)</math>-differentially private.
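
The two rules translate directly into privacy-budget bookkeeping. The following toy accountant (a sketch; the helper names are ours, not from any differential-privacy library) shows the difference:

<syntaxhighlight lang="python">
# Toy privacy-budget accounting for the composition theorems above
# (illustrative sketch; helper names are ours).
def sequential_budget(epsilons):
    """Mechanisms queried on the same database: budgets add up."""
    return sum(epsilons)

def parallel_budget(epsilons):
    """Mechanisms run on disjoint subsets of the database: the maximum dominates."""
    return max(epsilons)

print(sequential_budget([0.25, 0.25]))  # 0.5: two queries on the same data
print(parallel_budget([0.25, 0.25]))    # 0.25: one query per disjoint partition
</syntaxhighlight>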
===Robustness to post-processing===
For any deterministic or randomized function <math>F</math> defined over the image of the mechanism <math>\mathcal{A}</math>, if <math>\mathcal{A}</math> satisfies ε-differential privacy, so does <math>F(\mathcal{A})</math>.
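
For instance, a data curator may clamp and round a noisy count before publishing it without spending any further privacy budget. A minimal sketch (ours; it assumes a sensitivity-1 counting query with Laplace noise):

<syntaxhighlight lang="python">
# Post-processing sketch (illustrative): cleaning up a DP release costs
# no additional privacy budget.
import random

def dp_count(db, eps):
    # Laplace(1/eps) noise drawn as a difference of two exponentials.
    noise = random.expovariate(eps) - random.expovariate(eps)
    return sum(db) + noise

release = dp_count([1, 0, 1, 1], eps=0.5)  # eps-differentially private
cleaned = max(0, round(release))           # F(A(D)): still eps-DP by the theorem
print(release, cleaned)
</syntaxhighlight>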
===Group privacy===
In general, ε-differential privacy is designed to protect the privacy between neighboring databases which differ only in one row. This can be extended to protect databases differing in <math>c</math> rows (i.e., group privacy for groups of size <math>c</math>), because when <math>c</math> items change, the probability ratio is bounded by <math>\exp(\epsilon c)</math> instead of <math>\exp(\epsilon)</math>. Thus, setting ε instead to <math>\epsilon/c</math> achieves the desired result (protection of <math>c</math> items). In other words, instead of having each item protected with ε-differential privacy, every group of <math>c</math> items is now protected with ε-differential privacy (and each item is protected with <math>(\epsilon/c)</math>-differential privacy).
==ε-differentially private mechanisms==
Since differential privacy is a probabilistic concept, any differentially private mechanism is necessarily randomized. Some of these, like the Laplace mechanism, described below, rely on adding controlled noise to the function that we want to compute. Others, like the [[Exponential mechanism (differential privacy)|exponential mechanism]]<ref>[http://research.microsoft.com/pubs/65075/mdviadp.pdf F. McSherry and K. Talwar. Mechanism Design via Differential Privacy. Proceedings of the 48th Annual Symposium of Foundations of Computer Science, 2007.]</ref> and posterior sampling<ref>[https://arxiv.org/abs/1306.1066 Christos Dimitrakakis, Blaine Nelson, Aikaterini Mitrokotsa, Benjamin Rubinstein. Robust and Private Bayesian Inference. Algorithmic Learning Theory 2014]</ref> sample from a problem-dependent family of distributions instead.
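
As an illustration of the first kind of mechanism, the following sketch adds Laplace noise scaled to a query's sensitivity (our minimal rendering of the idea, not a production implementation; the function names and parameters are illustrative, and the sampler draws Laplace noise as a difference of two exponential variables):

<syntaxhighlight lang="python">
# Minimal sketch of a Laplace-noise mechanism (illustrative).
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def laplace_mechanism(db, query, sensitivity, eps):
    """Release query(db) + Lap(sensitivity/eps); this is eps-DP when
    `sensitivity` bounds how much one person's data can change the answer."""
    return query(db) + laplace_noise(sensitivity / eps)

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
db = [1, 0, 1, 0, 1, 0]
print(laplace_mechanism(db, sum, sensitivity=1, eps=0.5))
</syntaxhighlight>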
For example, consider a database of medical records <math>D_1</math> where each record is a pair ('''Name''', '''X'''), where <math>X</math> is a Boolean denoting whether a person has diabetes or not:

{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
|-
!Name!!Has Diabetes (X)
|-
|Ross
||1
|-
|Monica
||1
|-
|Joey
||0
|-
|Phoebe
||0
|-
|Chandler
||1
|-
|Rachel
||0
|}
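The following sketch (an illustration added here, using the rows of the table above; the attack framing and names are ours) shows how two exact partial-sum queries reveal one person's entry, and how ε-differentially private answers blunt the attack:

<syntaxhighlight lang="python">
# Differencing sketch on the table above (illustrative).
import random

# Row order: Ross, Monica, Joey, Phoebe, Chandler, Rachel
bits = [1, 1, 0, 0, 1, 0]

def q(i):
    """Q(i): exact count of 1s among the first i rows."""
    return sum(bits[:i])

print(q(5) - q(4))  # exact answers leak Chandler's bit: prints 1

def noisy_q(i, eps):
    """eps-DP answer: add Laplace(1/eps) noise (a count has sensitivity 1)."""
    noise = random.expovariate(eps) - random.expovariate(eps)
    return q(i) + noise

print(noisy_q(5, 0.5) - noisy_q(4, 0.5))  # dominated by noise; no reliable leak
</syntaxhighlight>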
===Randomized response===
A simple example, developed especially in the social sciences, is to ask a person to answer the question "Do you own the attribute ''A''?" according to the following procedure:
#[[Coin flipping|Toss a coin]].
#If heads, then toss the coin again (ignoring the outcome), and answer the question honestly.
#If tails, then toss the coin again and answer "Yes" if heads, "No" if tails.
The confidentiality arises from the refutability of each individual response. Aggregated over many responses, however, the data remain meaningful: people without the attribute answer "Yes" a quarter of the time and people who actually have it three-quarters of the time, so if <math>p</math> is the true proportion of people with the attribute, the expected fraction of "Yes" answers is <math>(1/4)(1-p) + (3/4)p = 1/4 + p/2</math>, and <math>p</math> can be estimated accordingly, as the simulation below illustrates.
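
A short simulation of the protocol (a sketch added here; the parameter names are ours) confirms both the deniability of each answer and the recoverability of the aggregate:

<syntaxhighlight lang="python">
# Simulation of the coin-flip protocol above (illustrative sketch).
import random

def randomized_response(truth: bool) -> bool:
    if random.random() < 0.5:         # first toss heads: answer honestly
        return truth
    return random.random() < 0.5      # first toss tails: second toss decides

# Population with true proportion p of the attribute.
p, n = 0.3, 100_000
answers = [randomized_response(random.random() < p) for _ in range(n)]
frac_yes = sum(answers) / n

# E[frac_yes] = 1/4 + p/2, so invert the line to estimate p.
p_hat = 2 * frac_yes - 0.5
print(f"true p = {p}, estimated p = {p_hat:.3f}")
</syntaxhighlight>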
Although this example, inspired by [[randomized response]], might be applicable to microdata (i.e., releasing datasets with each individual response), differential privacy by definition excludes microdata releases and is only applicable to queries (i.e., aggregating individual responses into one result), as such releases would violate its requirements, more specifically the plausible deniability that a subject participated or not.<ref>Dwork, Cynthia. "A firm foundation for private data analysis." Communications of the ACM 54.1 (2011): 86–95, supra note 19, page 91.</ref><ref>Bambauer, Jane, Krishnamurty Muralidhar, and Rathindra Sarathy. "Fool's gold: an illustrated critique of differential privacy." Vand. J. Ent. & Tech. L. 16 (2013): 701.</ref>
===Stable transformations===
A transformation <math>T</math> is <math>c</math>-stable if the [[Hamming distance]] between <math>T(A)</math> and <math>T(B)</math> is at most <math>c</math>-times the Hamming distance between <math>A</math> and <math>B</math> for any two databases <math>A,B</math>. Theorem 2 in <ref name="PINQ" /> asserts that if there is a mechanism <math>M</math> that is <math>\epsilon</math>-differentially private, then the composite mechanism <math>M\circ T</math> is <math>(\epsilon \times c)</math>-differentially private.
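
The bookkeeping this theorem licenses is simple enough to sketch directly (our illustration; the helper is hypothetical, not from the cited paper):

<syntaxhighlight lang="python">
# Stability bookkeeping sketch (illustrative helper).
def composite_epsilon(eps, c):
    """If T is c-stable and M is eps-DP, then M∘T is (eps * c)-DP."""
    return eps * c

# A record-by-record map is 1-stable: a one-row change stays a one-row change.
print(composite_epsilon(0.5, 1))  # 0.5
# A transformation that may emit each record twice is 2-stable.
print(composite_epsilon(0.5, 2))  # 1.0
</syntaxhighlight>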
==Adoption of differential privacy in real-world applications==
Several uses of differential privacy in practice are known to date:
*2008: [[United States Census Bureau|U.S. Census Bureau]], for showing commuting patterns.<ref name="MachanavajjhalaKAGV08" />
*2014: [[Google]]'s RAPPOR, for telemetry such as learning statistics about unwanted software hijacking users' settings.<ref name="RAPPOR" /><ref>{{Citation|title=google/rappor|date=2021-07-15|url=https://github.com/google/rappor|publisher=GitHub}}</ref>
*2015: Google, for sharing historical traffic statistics.<ref name="Eland" />
*2016: [[Apple Inc.|Apple]] announced its intention to use differential privacy in [[iOS 10]] to improve its [[Intelligent personal assistant]] technology.<ref>{{cite web|title=Apple - Press Info - Apple Previews iOS 10, the Biggest iOS Release Ever|url=https://www.apple.com/pr/library/2016/06/13Apple-Previews-iOS-10-The-Biggest-iOS-Release-Ever.html|website=Apple|access-date=16 June 2016}}</ref>
*2017: Microsoft, for telemetry in Windows.<ref name="DpWinTelemetry" />
*2019: Privitar Lens is an API using differential privacy.<ref>{{cite web|title=Privitar Lens|url=https://www.privitar.com/privitar-lens|access-date=20 February 2018}}</ref>
*2020: LinkedIn, for advertiser queries.<ref name="DpLinkedIn" />
==See also==
*Quasi-identifier
*Exponential mechanism (differential privacy) – a technique for designing differentially private algorithms
*k-anonymity
*Differentially private analysis of graphs
*[[Protected health information]]
==Further reading==
*A reading list on differential privacy
*Abowd, John. 2017. "How Will Statistical Agencies Operate When All Data Are Private?". Journal of Privacy and Confidentiality 7 (3). (slides)
*"Differential Privacy: A Primer for a Non-technical Audience", Kobbi Nissim, Thomas Steinke, Alexandra Wood, Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, David R. O’Brien, and Salil Vadhan, Harvard Privacy Tools Project, February 14, 2018
*Dinur, Irit and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems (PODS '03). ACM, New York, NY, USA, 202–210.
*Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. In Halevi, S. & Rabin, T. (Eds.), Calibrating Noise to Sensitivity in Private Data Analysis. Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4–7, 2006. Proceedings, Springer Berlin Heidelberg, 265–284.
*Dwork, Cynthia. 2006. Differential Privacy, 33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006), Springer Verlag, 4052, 1–12.
*Dwork, Cynthia and Aaron Roth. 2014. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science. Vol. 9, Nos. 3–4, 211–407.
*Machanavajjhala, Ashwin, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. 2008. Privacy: Theory Meets Practice on the Map, International Conference on Data Engineering (ICDE) 2008: 277–286.
*Dwork, Cynthia and Moni Naor. 2010. On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy, Journal of Privacy and Confidentiality: Vol. 2: Iss. 1, Article 8. Available at: http://repository.cmu.edu/jpc/vol2/iss1/8.
*Kifer, Daniel and Ashwin Machanavajjhala. 2011. No free lunch in data privacy. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data (SIGMOD '11). ACM, New York, NY, USA, 193–204.
*Erlingsson, Úlfar, Vasyl Pihur and Aleksandra Korolova. 2014. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS '14). ACM, New York, NY, USA, 1054–1067.
*Abowd, John M. and Ian M. Schmutte. 2017. Revisiting the economics of privacy: Population statistics and confidentiality protection as public goods. Labor Dynamics Institute, Cornell University, at https://digitalcommons.ilr.cornell.edu/ldi/37/
*Abowd, John M. and Ian M. Schmutte. Forthcoming. An economic analysis of privacy protection and statistical accuracy as social choices. American Economic Review.
*Apple, Inc. 2016. Apple previews iOS 10, the biggest iOS release ever. Press Release (June 13). https://www.apple.com/newsroom/2016/06/apple-previews-ios-10-biggest-ios-release-ever.html
*Ding, Bolin, Janardhan Kulkarni, and Sergey Yekhanin. 2017. Collecting Telemetry Data Privately, NIPS 2017.
*http://www.win-vector.com/blog/2015/10/a-simpler-explanation-of-differential-privacy/
*Ryffel, Theo, Andrew Trask, et al. "A generic framework for privacy preserving deep learning"
==External links==
*Differential Privacy by Cynthia Dwork, ICALP July 2006.
*The Algorithmic Foundations of Differential Privacy by Cynthia Dwork and Aaron Roth, 2014.
*Differential Privacy: A Survey of Results by Cynthia Dwork, Microsoft Research, April 2008
*Privacy of Dynamic Data: Continual Observation and Pan Privacy by Moni Naor, Institute for Advanced Study, November 2009
*Tutorial on Differential Privacy by Katrina Ligett, California Institute of Technology, December 2013
*A Practical Beginner's Guide to Differential Privacy by Christine Task, Purdue University, April 2012
*Private Map Maker v0.2 on the Common Data Project blog
*Learning Statistics with Privacy, aided by the Flip of a Coin by Úlfar Erlingsson, Google Research Blog, October 2014
*Technology Factsheet: Differential Privacy by Raina Gandhi and Amritha Jayanti, Belfer Center for Science and International Affairs, Fall 2020
[[Category:Theory of cryptography]]