=Mean-Field Theory of the BA Model=
The derivation below follows the original BA paper and the related literature.<ref>Barabási, Albert-László; Albert, Réka (October 15, 1999). "Emergence of scaling in random networks". ''Science''. 286 (5439): 509–512. arXiv:cond-mat/9910332. Bibcode:1999Sci...286..509B.</ref><ref>R. Albert, A.-L. Barabási (2002). "Statistical mechanics of complex networks". ''Reviews of Modern Physics''. 74: 47–97.</ref><ref>H. Jeong, Z. Néda, A.-L. Barabási (2003). "Measuring preferential attachment in evolving networks". ''Europhys. Lett.''. 61 (4): 567.</ref>
==Model Setup==
* Initially there are <math>m_0</math> nodes.
* 1. Growth rule: at every time step a new node i is added (its arrival time is denoted <math>t_i</math>); each new node brings m edges, i.e. an increase of 2m in total degree.
** Of this increase, the existing nodes share m degrees and the newly added node itself receives m degrees.
** Hence at time t the network has <math>m_0 + t</math> nodes in total and a total degree of <math>2mt</math>.
* 2. Preferential attachment rule: the probability that an existing node receives one of the m new edges is proportional to its degree <math>k_i</math> (a short numerical sketch of these two rules follows this list).
** Clearly, the earlier a node joins (the smaller <math>t_i</math>), the more links it tends to accumulate.
** Starting from time 0, the degree <math>k_i</math> of every node in the system keeps increasing with each time step.
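To make the two rules concrete, here is a minimal, self-contained sketch of the growth bookkeeping, assuming a dictionary <code>degrees</code> that maps each node to its current degree (an illustration only, not the NetworkX-based implementation given later on this page):
<syntaxhighlight lang="python">
import random

def ba_step(degrees, new_node, m):
    """One BA growth step: add `new_node` with m edges whose targets are
    drawn with probability proportional to their current degree."""
    nodes = list(degrees)
    weights = [degrees[v] for v in nodes]
    targets = set()
    while len(targets) < m:                     # resample until m distinct targets are found
        targets.add(random.choices(nodes, weights=weights, k=1)[0])
    for v in targets:
        degrees[v] += 1                         # each chosen old node gains one degree
    degrees[new_node] = m                       # the new node itself starts with degree m
    return targets

# Demo: seed with m0 = 3 nodes forming a triangle, then grow for t = 7 steps with m = 2.
degrees = {0: 2, 1: 2, 2: 2}
for new_node in range(3, 10):
    ba_step(degrees, new_node, m=2)
print(sum(degrees.values()))  # total degree: 6 (seed) + 2*m*t = 6 + 2*2*7 = 34
</syntaxhighlight>
The demo confirms the bookkeeping above: every step adds exactly 2m to the total degree, split evenly between the new node and the old nodes it attaches to.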
==Degree Growth and Time Dependence==
The probability that node i gains a unit of degree in one time step is written <math>\Pi(k_i)</math>, where
<math>\Pi(k_i) = \frac{k_i}{\sum_j k_j} = \frac{k_i}{2mt}</math>
In one time step, <math>\Delta k = m</math> new edge endpoints are distributed among the existing nodes, so the rate of change of <math>k_i</math> with t can be written as:
<math>\frac{\partial k_i}{\partial t} = \Delta k \, \Pi(k_i) = m \frac{k_i}{2mt} = \frac{k_i}{2t}</math>
Separating variables:
<math>\frac{d k_i}{k_i} = \frac{d t}{2t}</math>
<math>\int \frac{1}{k_i} \, d k_i = \int \frac{1}{2t} \, dt</math>
Integrating gives:
<math>k_i = (Ct)^{0.5}</math>  Eq. (1)
Now impose the model's initial condition: every newly added node starts with degree m,
<math>k_i(t_i) = m</math>, which we substitute into Eq. (1).
This yields <math>C = m^{2}/t_i</math>  Eq. (2)
Substituting back into Eq. (1) gives:
<math>k_i = m \left(\frac{t}{t_i}\right)^{0.5}</math>  Eq. (3)
For a given node i, the arrival time <math>t_i</math> is fixed, so Eq. (3) describes how its degree <math>k_i</math> grows as a power law in time; this kind of preferential-attachment scaling can also be measured in real evolving networks.<ref>H. Jeong, Z. Néda, A.-L. Barabási (2003). "Measuring preferential attachment in evolving networks". ''Europhys. Lett.''. 61 (4): 567.</ref> A quick numerical check of Eq. (3) is sketched below.
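As a rough check of Eq. (3), the sketch below grows a BA-style network using the repeated-nodes trick (each node appears in a list once per incident edge, so uniform sampling from that list is proportional to degree) and compares the final degree of the first grown node (arrival time <math>t_i = 1</math>) with the mean-field prediction <math>m(t/t_i)^{0.5}</math>. The network size, tracked node, and seed are illustrative choices, and individual runs fluctuate around the prediction:
<syntaxhighlight lang="python">
import math
import random

def grow_ba_track(n, m, tracked, seed=0):
    """Grow a BA-style network of n nodes and return (t, degree) snapshots
    of the `tracked` node, where t counts growth steps."""
    rng = random.Random(seed)
    degree = {v: 0 for v in range(m)}   # m isolated seed nodes
    targets = list(range(m))            # the first new node connects to all seed nodes
    repeated_nodes = []                 # each node listed once per incident edge
    history = []
    for source in range(m, n):
        for v in targets:
            degree[v] += 1
        degree[source] = m
        repeated_nodes.extend(targets)
        repeated_nodes.extend([source] * m)
        t = source - m + 1              # growth steps completed so far
        if tracked in degree:
            history.append((t, degree[tracked]))
        new_targets = set()
        while len(new_targets) < m:     # uniform choice from repeated_nodes = preferential attachment
            new_targets.add(rng.choice(repeated_nodes))
        targets = list(new_targets)
    return history

m, n = 3, 20000
tracked = m                             # node m is the first node added by growth, so t_i = 1
t, k = grow_ba_track(n, m, tracked)[-1]
print(k, m * math.sqrt(t / 1.0))        # simulated degree vs. mean-field m*(t/t_i)**0.5
</syntaxhighlight>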
==Cumulative Distribution==
To build a cumulative distribution we ask for the probability that <math>k_i(t) < k</math>, i.e. <math>P(k_i(t) < k)</math>.
From Eq. (3):
<math>P(k_i(t) < k) = P\left( m \left(\frac{t}{t_i}\right)^{0.5} < k \right) = P\left( t_i > \frac{m^2 t}{k^2} \right) = 1 - P\left(t_i \leqslant \frac{m^2 t}{k^2} \right)</math>  Eq. (4)
At the initial state <math>t = 0</math> there are already <math>m_0</math> nodes, so their arrival times are <math>t_{m_0} = 0</math>.
Assuming nodes are added at equal (uniform) time intervals, the arrival time <math>t_i</math> has a constant density:
<math>P(t_i) = \frac{1}{m_0 + t}</math>  Eq. (5)
===Properties of the Uniform Distribution===
* If a continuous random variable X has probability density function <math>f(x) = \frac{1}{b-a}</math> for <math>a \leqslant x \leqslant b</math>, then X is said to be uniformly distributed on [a, b], written <math>X \sim U[a, b]</math>.
* For any subinterval <math>[x_1, x_2]</math> of <math>[a, b]</math>, <math>P\{x_1 \leqslant X \leqslant x_2\} = \frac{x_2 - x_1}{b - a}</math>.
By this property, <math>P(t_i \leqslant \frac{m^2 t}{k^2}) = \frac{m^2 t}{k^2} \cdot \frac{1}{m_0 + t}</math>, so substituting Eq. (5) into Eq. (4) gives:
<math>P(k_i(t) < k) = 1 - \frac{m^2 t}{k^2} P(t_i) = 1 - \frac{m^2 t}{k^2 (m_0 + t)}</math>  Eq. (6)
Differentiating the cumulative distribution with respect to k gives the probability density function:
<math>P(k) = \frac{\partial P(k_i(t) < k)}{\partial k} = \frac{2m^2 t}{m_0 + t} \frac{1}{k^3}</math>  Eq. (7)
That is, <math>P(k) \propto k^{-\gamma}</math> with <math>\gamma = 3</math>, independent of m; a numerical check of this exponent is sketched below.
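As a sanity check on <math>\gamma = 3</math>, the sketch below generates a BA graph with NetworkX's built-in <code>barabasi_albert_graph</code> and estimates the tail exponent with a simple continuous maximum-likelihood (Hill-type) formula; the cutoff <math>k_{\min}</math> and the network size are illustrative choices, and only a value roughly near 3 should be expected:
<syntaxhighlight lang="python">
import math
import networkx as nx

# Generate a BA network and collect its degree sequence.
n, m = 50000, 3
G = nx.barabasi_albert_graph(n, m, seed=42)
degrees = [d for _, d in G.degree()]

# Crude continuous MLE for the tail exponent of P(k) ~ k^(-gamma), k >= k_min.
k_min = 10                         # illustrative cutoff above which the power law is assumed
tail = [k for k in degrees if k >= k_min]
gamma = 1 + len(tail) / sum(math.log(k / k_min) for k in tail)
print(f"estimated gamma ~ {gamma:.2f}  (mean-field prediction: 3)")
</syntaxhighlight>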
= Implementation =
<syntaxhighlight lang="python">
import random

import matplotlib.pyplot as plt
import networkx as nx


def _random_subset(seq, m):
    """Return a set of m unique elements drawn uniformly from seq.
    Nodes appear in seq once per incident edge, so uniform sampling
    from seq implements preferential attachment."""
    targets = set()
    while len(targets) < m:
        targets.add(random.choice(seq))
    return targets


def barabasi_albert_graph(n, m, seed=None):
    """Grow a BA graph with n nodes where every new node attaches m edges.
    This is a generator: it yields the matplotlib axes after each growth step
    (so the construction can be animated); the completed graph G is returned
    when the generator is exhausted (available via StopIteration.value)."""
    if m < 1 or m >= n:
        raise ValueError("m must satisfy 1 <= m < n")
    if seed is not None:
        random.seed(seed)
    # Lay out all n nodes on a circle once, so edges can be drawn incrementally.
    plot_G = nx.empty_graph(n)
    nx.draw_circular(plot_G, with_labels=True, font_weight='bold')
    pos = nx.circular_layout(plot_G)
    ax = plt.gca()
    # Add m initial nodes (m0 in Barabasi-speak).
    G = nx.empty_graph(m)
    # Target nodes for new edges.
    targets = list(range(m))
    # List of existing nodes, with each node repeated once per incident edge.
    repeated_nodes = []
    # Start adding the other n - m nodes. The first new node is labelled m.
    source = m
    while source < n:
        # Connect the new node `source` to the m chosen targets.
        G.add_edges_from(zip([source] * m, targets))
        # Draw only the newly added edges on the fixed circular layout.
        new_edges = nx.Graph()
        new_edges.add_edges_from(zip([source] * m, targets))
        nx.draw_networkx_edges(new_edges, pos, ax=ax)
        yield ax
        # Each target gains one edge endpoint ...
        repeated_nodes.extend(targets)
        # ... and the new node `source` contributes m endpoints of its own.
        repeated_nodes.extend([source] * m)
        # Choose m unique nodes for the next step; uniform sampling from
        # repeated_nodes is preferential attachment.
        targets = _random_subset(repeated_nodes, m)
        source += 1
    return G
</syntaxhighlight>
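Because the function above is a generator (it yields the axes after every growth step), it has to be driven step by step. A minimal usage sketch, with an arbitrary figure size and network size:
<syntaxhighlight lang="python">
import matplotlib.pyplot as plt

plt.figure(figsize=(6, 6))
gen = barabasi_albert_graph(20, 2, seed=1)
try:
    while True:
        next(gen)              # each step adds one node and draws its m new edges
except StopIteration as stop:
    G = stop.value             # the finished graph returned by the generator
plt.show()
print(G.number_of_nodes(), G.number_of_edges())   # 20 nodes, m*(n-m) = 36 edges
</syntaxhighlight>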
[https://github.com/swarma/DemoForComplexity/blob/master/ab_model.ipynb Notebook repository]
=References=
<references/>
[[Category:复杂系统]]