(or [[Poisson distribution|Poisson]] in the limit of large ''n'', if the average degree <math>\langle k\rangle=p(n-1)</math> is held fixed). Most networks in the real world, however, have degree distributions very different from this. Most are highly [[Skewness|right-skewed]], meaning that a large majority of nodes have low degree but a small number, known as "hubs", have high degree. Some networks, notably the Internet, the [[world wide web]], and some social networks were argued to have degree distributions that approximately follow a [[power law]]: <math>P(k)\sim k^{-\gamma}</math>, where γ is a constant. Such networks are called scale-free networks and have attracted particular attention for their structural and dynamical properties. However, recent studies based on real-world data sets have claimed that, although most of the observed networks have fat-tailed degree distributions, they deviate from being scale-free.
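
For illustration, the following minimal Python sketch (using the NetworkX library, with a Barabási–Albert graph standing in only as an assumed stand-in for real data) estimates an empirical degree distribution; an approximately straight decay of <math>P(k)</math> versus <math>k</math> on a log–log plot would suggest, but not prove, such a fat-tailed, power-law-like form.

<syntaxhighlight lang="python">
# Minimal sketch: empirical degree distribution P(k) of a graph.
# Assumption: NetworkX is available; the BA graph is only a stand-in for real data.
import collections

import networkx as nx

G = nx.barabasi_albert_graph(10_000, 3, seed=42)
n = G.number_of_nodes()
counts = collections.Counter(d for _, d in G.degree())
P = {k: c / n for k, c in sorted(counts.items())}   # empirical P(k)

for k, p in list(P.items())[:10]:
    print(f"P({k}) = {p:.4f}")
</syntaxhighlight>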

== Excess degree distribution ==

Excess degree distribution is the probability distribution, for a node reached by following an edge, of the number of other edges attached to that node.<ref name=":0">{{Cite book|last=Newman|first=Mark|url=http://www.oxfordscholarship.com/view/10.1093/oso/9780198805090.001.0001/oso-9780198805090|title=Networks|date=2018-10-18|publisher=Oxford University Press|isbn=978-0-19-880509-0|volume=1|language=en|doi=10.1093/oso/9780198805090.001.0001}}</ref> In other words, it is the distribution of outgoing links from a node reached by following a link.

Suppose a network has a degree distribution <math>P(k)</math>. By selecting one node (randomly or not) and going to one of its neighbors (assuming the node has at least one neighbor), the probability of that node having <math>k</math> neighbors is not given by <math>P(k)</math>. The reason is that, whenever some node is selected in a heterogeneous network, it is more probable to reach the hubs by following one of the existing neighbors of that node. The true probability that such a node has degree <math>k</math> is <math>q(k)</math>, which is called the ''excess degree'' of that node. In the [[configuration model]], in which correlations between the nodes are ignored and every node is assumed to be connected to any other node in the network with the same probability, the excess degree distribution can be found as<ref name=":0" />:

<math>
q(k) =  \frac{k+1}{\langle k \rangle}P(k+1),
</math>

where <math>{\langle k \rangle}</math> is the mean degree (average degree) of the model. It follows that the average degree of a neighbor of any node is greater than the average degree of the node itself. In social networks, this means that your friends, on average, have more friends than you. This is known as the [[friendship paradox]].
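
The excess degree formula above can be evaluated directly; a minimal sketch (assuming the degree distribution is given as a Python dictionary mapping each degree to its probability, with a hypothetical toy distribution as input):

<syntaxhighlight lang="python">
# Minimal sketch: configuration-model excess degree distribution
# q(k) = (k + 1) * P(k + 1) / <k>, with P given as {k: P(k)}.
def excess_degree_distribution(P):
    mean_k = sum(k * p for k, p in P.items())                    # <k>
    return {k: (k + 1) * P.get(k + 1, 0.0) / mean_k
            for k in range(max(P))}

q = excess_degree_distribution({1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1})  # toy P(k)
print(q)   # {0: 0.2, 1: 0.3, 2: 0.3, 3: 0.2}; q sums to 1
</syntaxhighlight>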

It can be shown that a network can have a [[giant component]] if its average excess degree is larger than one:

<math>
\sum_k kq(k) > 1 \Rightarrow  {\langle k^2 \rangle}-2{\langle k \rangle}>0 
</math>
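
A minimal sketch of this criterion (again assuming the degree distribution is a plain Python dictionary; both example distributions are hypothetical):

<syntaxhighlight lang="python">
# Minimal sketch: the giant-component condition <k^2> - 2<k> > 0.
def has_giant_component(P):
    mean_k  = sum(k * p for k, p in P.items())
    mean_k2 = sum(k * k * p for k, p in P.items())
    return mean_k2 - 2 * mean_k > 0

print(has_giant_component({0: 0.5, 1: 0.5}))           # False: too sparse
print(has_giant_component({1: 0.3, 2: 0.3, 3: 0.4}))   # True
</syntaxhighlight>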

== The Generating Functions Method ==

[[Probability-generating function|Generating functions]] can be used to calculate different properties of random networks. Given the degree distribution and the excess degree distribution of some network, <math>P(k)</math> and <math>q(k)</math> respectively, it is possible to write two power series in the following forms:

<math>
G_0(x) = \textstyle \sum_{k} \displaystyle P(k)x^k 
</math> and <math>
G_1(x) = \textstyle \sum_{k} \displaystyle q(k)x^k = \textstyle \sum_{k} \displaystyle \frac{k}{\langle k \rangle}P(k)x^{k-1} 
</math>

<math>G_1(x)</math> can also be obtained from derivatives of <math>G_0(x)</math>:

<math>
G_1(x) = \frac{G'_0(x)}{G'_0(1)} 
</math>
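
As a numerical illustration (with an assumed toy distribution, not data from any source), <math>G_0</math> can be represented by its coefficient list <math>[P(0), P(1), \dots]</math>, and <math>G_1</math> then follows from the derivative relation above:

<syntaxhighlight lang="python">
# Minimal sketch: G_0 as the coefficient list [P(0), P(1), ...] and
# G_1(x) = G_0'(x) / G_0'(1) obtained by differentiating the coefficients.
def derivative(coeffs):
    # coefficients of d/dx of the polynomial sum_k coeffs[k] * x**k
    return [k * c for k, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

P = [0.0, 0.4, 0.3, 0.2, 0.1]              # toy degree distribution P(k)
G0_prime = derivative(P)
mean_k = evaluate(G0_prime, 1)             # G_0'(1) = <k>
G1 = [c / mean_k for c in G0_prime]        # coefficients of G_1, i.e. q(k)

print(mean_k)             # 2.0
print(G1)                 # [0.2, 0.3, 0.3, 0.2]
print(evaluate(G1, 1))    # 1.0: G_1 is a normalised probability generating function
</syntaxhighlight>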

If we know the generating function for a probability distribution <math>P(k)</math>, then we can recover the values of <math>P(k)</math> by differentiating:

<math>
P(k) = \frac{1}{k!} {\operatorname{d}^k\!G\over\operatorname{d}\!x^k}\biggl \vert _{x=0}
</math>

Some properties, e.g. the moments, can be easily calculated from <math>G_0(x)</math> and its derivatives:

* <math>{\langle k \rangle} = G'_0(1)</math>
* <math>{\langle k^2 \rangle} = G''_0(1) + G'_0(1)</math>
* <math>{\langle k^m \rangle} = \Biggl[{\bigg(\operatorname{x}{\operatorname{d}\!\over\operatorname{dx}\!}\biggl)^m}G_0(x)\Biggl]_{x=1}</math>
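
A brief symbolic sketch of these identities (assuming the SymPy library, and using the Poisson generating function <math>G_0(x)=e^{c(x-1)}</math> of an ER network as an assumed concrete example):

<syntaxhighlight lang="python">
# Minimal symbolic sketch: recover P(k) by differentiation and read the moments
# off the derivatives of G_0, for a Poisson (ER) generating function.
import sympy as sp

x = sp.symbols('x')
c = 3                                   # assumed mean degree
G0 = sp.exp(c * (x - 1))                # generating function of a Poisson P(k)

# P(k) = (1/k!) d^k G0 / dx^k at x = 0
P = [sp.diff(G0, x, k).subs(x, 0) / sp.factorial(k) for k in range(4)]

G1 = sp.diff(G0, x) / sp.diff(G0, x).subs(x, 1)      # G_1(x) = G_0'(x) / G_0'(1)
mean_k  = sp.diff(G0, x).subs(x, 1)                  # <k>   = G_0'(1)
mean_k2 = sp.diff(G0, x, 2).subs(x, 1) + mean_k      # <k^2> = G_0''(1) + G_0'(1)

print([sp.simplify(p) for p in P])      # exp(-3), 3*exp(-3), 9*exp(-3)/2, ...
print(sp.simplify(G1 - G0))             # 0: for a Poisson network G_1 = G_0
print(mean_k, mean_k2)                  # 3 and 12
</syntaxhighlight>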

For Poisson-distributed random networks, such as the ER graph, <math>G_1(x) = G_0(x)</math>; that is the reason why the theory of random networks of this type is especially simple. The probability distributions for the 1st and 2nd-nearest neighbors are generated by the functions <math>G_0(x)</math> and <math>G_0(G_1(x))</math>. By extension, the distribution of <math>m</math>-th neighbors is generated by:

<math>
G_0\bigl(G_1(...G_1(x)...)\bigr) 
</math>, with <math>m-1</math> iterations of the function <math>G_1</math> acting on itself.<ref name=":1">{{Cite journal|last=Newman|first=M. E. J.|last2=Strogatz|first2=S. H.|last3=Watts|first3=D. J.|date=2001-07-24|title=Random graphs with arbitrary degree distributions and their applications|url=https://link.aps.org/doi/10.1103/PhysRevE.64.026118|journal=Physical Review E|language=en|volume=64|issue=2|pages=026118|doi=10.1103/PhysRevE.64.026118|issn=1063-651X|doi-access=free}}</ref>

The average number of 1st neighbors, <math>c_1</math>, is <math>{\langle k \rangle} = {dG_0(x)\over dx}|_{x=1}</math> and the average number of 2nd neighbors is:

<math>
c_2 = \biggl[ {d\over dx}G_0\big(G_1(x)\big)\biggl]_{x=1} = G_1'(1)G'_0\big(G_1(1)\big) =  G_1'(1)G'_0(1) = G''_0(1)
</math>
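
The relation <math>c_2 = G''_0(1) = {\langle k^2 \rangle}-{\langle k \rangle}</math> can be compared against a direct count of second neighbors; the sketch below (assuming NetworkX, with a sparse ER graph as a stand-in) should give two roughly matching numbers on such a tree-like random graph:

<syntaxhighlight lang="python">
# Minimal sketch: predicted vs. directly counted mean number of 2nd neighbours.
import networkx as nx

G = nx.gnp_random_graph(2000, 0.002, seed=1)           # sparse ER stand-in
degrees = [d for _, d in G.degree()]
n = len(degrees)

c1 = sum(degrees) / n                                  # <k>
c2_predicted = sum(d * d for d in degrees) / n - c1    # <k^2> - <k> = G_0''(1)

# Count nodes at distance exactly two from each node, then average.
c2_counted = sum(
    len({w for u in G[v] for w in G[u]} - set(G[v]) - {v})
    for v in G
) / n

print(c1, c2_predicted, c2_counted)
</syntaxhighlight>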

== Degree distribution for directed networks ==

[[File:Enwiki-degree-distribution.png|thumb|right|320px|In/out degree distribution for Wikipedia's hyperlink graph (logarithmic scales)]]

In a directed network, each node has some in-degree <math>k_{in}</math> and some out-degree <math>k_{out}</math>. If <math>P(k_{in}, k_{out})</math> is the probability that a randomly chosen node has in-degree <math>k_{in}</math> and out-degree <math>k_{out}</math>, then the generating function assigned to this [[joint probability distribution]] can be written with two variables <math>x</math> and <math>y</math> as:

<math>
\mathcal{G}(x,y) =  \sum_{k_{in},k_{out}} \displaystyle P({k_{in},k_{out}})x^{k_{in}}y^{k_{out}} .
</math>

Since every link must leave some node and enter another, the average in-degree equals the average out-degree, so

<math>
\langle{k_{in}-k_{out}}\rangle =\sum_{k_{in},k_{out}} \displaystyle (k_{in}-k_{out})P({k_{in},k_{out}}) = 0 ,
</math>

which implies that the generating function must satisfy:

<math>
  {\partial \mathcal{G}\over\partial x}\vert _{x,y=1} =  {\partial \mathcal{G}\over\partial y}\vert _{x,y=1} = c,
</math>

where <math>c</math> is the mean degree (both in and out) of the nodes in the network; <math>\langle{k_{in}}\rangle = \langle{k_{out}}\rangle = c.</math>
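
A minimal sketch of the joint distribution <math>P(k_{in}, k_{out})</math> (assuming NetworkX, with a growing-network digraph standing in for a real directed network), which also checks that <math>\langle{k_{in}}\rangle = \langle{k_{out}}\rangle</math>:

<syntaxhighlight lang="python">
# Minimal sketch: empirical joint in/out-degree distribution of a digraph.
import collections

import networkx as nx

D = nx.gn_graph(5000, seed=1)              # stand-in directed network
n = D.number_of_nodes()
counts = collections.Counter((D.in_degree(v), D.out_degree(v)) for v in D)
P_joint = {kk: c / n for kk, c in counts.items()}       # P(k_in, k_out)

mean_in  = sum(k_in  * p for (k_in, k_out), p in P_joint.items())
mean_out = sum(k_out * p for (k_in, k_out), p in P_joint.items())
print(mean_in, mean_out)   # equal: every link leaves one node and enters another
</syntaxhighlight>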

Using the function <math>\mathcal{G}(x,y)</math>, we can again find the generating functions for the in/out-degree distribution and the in/out-excess degree distribution, as before. <math>G^{in}_0(x)</math> can be defined as the generating function for the number of arriving links at a randomly chosen node, and <math>G^{in}_1(x)</math> as the generating function for the number of arriving links at a node reached by following a randomly chosen link. We can also define generating functions <math>G^{out}_0(y)</math> and <math>G^{out}_1(y)</math> for the number leaving such a node:<ref name=":1" />

* <math>G^{in}_0(x) = \mathcal{G}(x,1)</math>
* <math>G^{in}_1(x) =  \frac{1}{c} {\partial \mathcal{G}\over\partial x}\vert _{y=1}</math>
* <math>G^{out}_0(y) = \mathcal{G}(1,y)</math>
* <math>G^{out}_1(y) =  \frac{1}{c} {\partial \mathcal{G}\over\partial y}\vert _{x=1}</math>

Here, the average number of 1st neighbors, <math>c</math> (or, as previously introduced, <math>c_1</math>), is <math>{\partial \mathcal{G}\over\partial x}\biggl \vert _{x,y=1} =  {\partial \mathcal{G}\over\partial y}\biggl \vert _{x,y=1}</math>, and the average number of 2nd neighbors reachable from a randomly chosen node is given by <math>c_2 = G_1'(1)G'_0(1) ={\partial^2 \mathcal{G}\over\partial x\partial y}\biggl \vert _{x,y=1}</math>. These are also the numbers of 1st and 2nd neighbors from which a random node can be reached, since these equations are manifestly symmetric in <math>x</math> and <math>y</math>.
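
For illustration, with a small hypothetical joint distribution (not taken from any data set), <math>c_1</math> and <math>c_2</math> follow directly from the partial-derivative formulas above:

<syntaxhighlight lang="python">
# Minimal sketch: c1 and c2 of a directed network from a hypothetical P(k_in, k_out).
P_joint = {(0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.3, (2, 2): 0.3}

c1_in  = sum(k_in * p for (k_in, k_out), p in P_joint.items())          # dG/dx at (1,1)
c1_out = sum(k_out * p for (k_in, k_out), p in P_joint.items())         # dG/dy at (1,1)
c2     = sum(k_in * k_out * p for (k_in, k_out), p in P_joint.items())  # d2G/dxdy at (1,1)

print(c1_in, c1_out, c2)   # 1.1, 1.1, 1.5  (c1_in == c1_out == c)
</syntaxhighlight>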