The Ising Model and the Maximum Entropy Distribution


Definitions

n spins [math]\displaystyle{ \textstyle\underline \sigma\in \{+1,-1\}^n }[/math] are connected by couplings [math]\displaystyle{ \textstyle J\in\mathbb{R}^{n\times n} }[/math].
Different realizations of [math]\displaystyle{ \textstyle J_{ij} }[/math] give different systems, for example:

  • [math]\displaystyle{ J_{ij}=\textrm{Constant} }[/math]: ferromagnet (positive constant) or antiferromagnet (negative constant).
  • [math]\displaystyle{ J_{ij}\sim\mathcal{N}(0,\Delta) }[/math]: Sherrington-Kirkpatrick model, spin glasses.
  • [math]\displaystyle{ J_{ij} \leftarrow }[/math] Hebb's rule: Hopfield model, associative memories.
  • [math]\displaystyle{ J_{ij} }[/math] are learned from data: neural networks.
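
As a concrete illustration, these coupling matrices are straightforward to generate in code. Below is a minimal Python sketch; the system size, random seed, number of patterns, and the [math]\displaystyle{ 1/n }[/math] scaling choices are illustrative assumptions, not prescriptions from the text.

<syntaxhighlight lang="python">
import numpy as np

n = 20                        # number of spins (illustrative)
rng = np.random.default_rng(0)

# Ferromagnet: constant (positive) couplings
J_ferro = np.ones((n, n))

# Sherrington-Kirkpatrick: i.i.d. Gaussian couplings with variance Delta
Delta = 1.0 / n
J_sk = rng.normal(0.0, np.sqrt(Delta), size=(n, n))

# Hopfield model: Hebb's rule over p stored random patterns xi
p = 3
xi = rng.choice([-1, 1], size=(p, n))
J_hopfield = xi.T @ xi / n

# Symmetrize and remove self-couplings
for J in (J_ferro, J_sk, J_hopfield):
    J[:] = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)
</syntaxhighlight>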


The energy of a configuration [math]\displaystyle{ \underline\sigma }[/math] is

[math]\displaystyle{ E(\underline\sigma)=-\sum_{ij}J_{ij}\sigma_i\sigma_j-\sum_i\sigma_i\theta_i, }[/math]

where [math]\displaystyle{ \theta_i }[/math] is the external field acting on spin [math]\displaystyle{ \textstyle i }[/math].
Note that throughout the discussion I will set the external field to zero, because this does not qualitatively change the results we are going to show, but significantly reduces the length of the formulas :)
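
The energy function translates directly into code. Here is a minimal sketch, with the external field kept as an optional argument (zero by default, matching the convention above):

<syntaxhighlight lang="python">
import numpy as np

def energy(sigma, J, theta=None):
    """E(sigma) = -sum_ij J_ij sigma_i sigma_j - sum_i theta_i sigma_i."""
    E = -sigma @ J @ sigma
    if theta is not None:
        E -= theta @ sigma
    return E
</syntaxhighlight>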

In the canonical ensemble, the probability of finding a configuration at equilibrium at inverse temperature [math]\displaystyle{ \beta }[/math] follows the Boltzmann distribution:

[math]\displaystyle{ P(\underline\sigma)=\frac{1}{Z}e^{\sum_{ij}\beta J_{ij}\sigma_i\sigma_j}, }[/math]

where

[math]\displaystyle{ Z=\sum_{\underline\sigma}e^{\sum_{ij}\beta J_{ij}\sigma_i\sigma_j} }[/math]

is the partition function.
Notice that

  • There are in total [math]\displaystyle{ 2^n }[/math] configurations in the summation.
  • When [math]\displaystyle{ \beta =0 }[/math], every configuration has the identical Boltzmann weight [math]\displaystyle{ \textstyle 2^{-n} }[/math].
  • When [math]\displaystyle{ \beta\to\infty }[/math], only the configurations with the lowest energy have finite probability measure.
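
For small systems these properties can be checked directly by brute force, since all [math]\displaystyle{ 2^n }[/math] configurations can be enumerated. Below is a minimal sketch reusing the energy function defined above; it is feasible only up to roughly [math]\displaystyle{ n\approx 20 }[/math]:

<syntaxhighlight lang="python">
import itertools
import numpy as np

def boltzmann_distribution(J, beta):
    """Exact Boltzmann probabilities by enumerating all 2^n configurations."""
    n = J.shape[0]
    configs = np.array(list(itertools.product([-1, 1], repeat=n)))
    energies = np.array([energy(s, J) for s in configs])
    weights = np.exp(-beta * energies)
    Z = weights.sum()                    # the partition function
    return configs, weights / Z
</syntaxhighlight>

At [math]\displaystyle{ \beta=0 }[/math] this returns the uniform weight [math]\displaystyle{ 2^{-n} }[/math] for every configuration; as [math]\displaystyle{ \beta }[/math] grows, the probability mass concentrates on the minimum-energy configurations.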

Why the Ising model?

In addition to physical motivations (phase transitions, criticality, ...), another reason the Ising model is useful across science and technology is that it is the maximum entropy model given the first two moments of the observations. That is, it is the distribution that makes the least bias or claim about the observed data beyond those moments.

Suppose we have [math]\displaystyle{ m }[/math] configurations [math]\displaystyle{ \textstyle \{\underline\sigma^t\}\in\{+1,-1\}^{m\times n} }[/math] sampled from the Boltzmann distribution of the model. We can then define the following statistics that can be observed from the data:

  • Magnetizations [math]\displaystyle{ \textstyle m_i= \frac{1}{m}\sum_{t=1}^m\sigma_i^t\approx\langle \sigma_i\rangle }[/math]
  • Correlations [math]\displaystyle{ \textstyle C_{ij}= \frac{1}{m}\sum_{t=1}^m\sigma_i^t\sigma_j^t\approx\langle \sigma_i\sigma_j\rangle }[/math]
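
Both statistics are simple empirical averages over the [math]\displaystyle{ m }[/math] samples. A minimal sketch, assuming the samples are stored row-wise in an [math]\displaystyle{ m\times n }[/math] array:

<syntaxhighlight lang="python">
import numpy as np

def empirical_moments(samples):
    """Magnetizations and correlations from an (m, n) array of sampled spins."""
    m = samples.shape[0]
    mag = samples.mean(axis=0)        # m_i, estimates <sigma_i>
    corr = samples.T @ samples / m    # C_ij, estimates <sigma_i sigma_j>
    return mag, corr
</syntaxhighlight>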

Many distributions can be used to generate data with given first and second moments. Suppose [math]\displaystyle{ \textstyle P(\underline\sigma) }[/math] is such a distribution. Then we can write the entropy of the distribution as

[math]\displaystyle{ S_p=-\sum_{\underline\sigma}P(\underline\sigma)\log P(\underline\sigma). }[/math]

Of course, there are constraints that need to be satisfied:

[math]\displaystyle{ \sum_{\underline\sigma}P(\underline\sigma)=1 }[/math]
[math]\displaystyle{ \forall i,\,\, \sum_{\underline\sigma}P(\underline\sigma)\sigma_i=m_i }[/math]
[math]\displaystyle{ \forall (i,j),\,\, \sum_{\underline\sigma}P(\underline\sigma)\sigma_i\sigma_j=C_{ij}. }[/math]

We define a Lagrangian as

[math]\displaystyle{ \mathcal {L}_P=-\sum_{\underline\sigma}P(\underline\sigma)\log P(\underline\sigma)+\sum_i\lambda_i\left (m_i-\sum_{\underline\sigma}P(\underline\sigma)\sigma_i\right )+\sum_{ij}\lambda_{ij}\left (C_{ij}-\sum_{\underline\sigma}P(\underline\sigma)\sigma_i\sigma_j\right )+\lambda \left(\sum_{\underline\sigma}P(\underline\sigma)-1\right), }[/math]

where [math]\displaystyle{ \textstyle \{\lambda_i\},\,\{\lambda_{ij}\} }[/math], and [math]\displaystyle{ \lambda }[/math] are Lagrange multipliers.

By setting [math]\displaystyle{ \textstyle \frac{\partial\mathcal {L}_P}{\partial P(\underline\sigma)}=0 }[/math] for each configuration, we have

[math]\displaystyle{ -(\log P(\underline\sigma)+1)+\sum_i\lambda_i\sigma_i+\sum_{ij}\lambda_{ij}\sigma_i\sigma_j+\lambda=0, }[/math]

which, after solving for [math]\displaystyle{ P(\underline\sigma) }[/math] and absorbing the constants into the normalization, yields

[math]\displaystyle{ P(\underline\sigma)=\frac{1}{Z}e^{\sum_i\lambda_i\sigma_i+\sum_{ij}\lambda_{ij}\sigma_i\sigma_j}. }[/math]

This is exactly the Boltzmann distribution of the Ising model once we identify [math]\displaystyle{ \lambda_{ij}=\beta J_{ij} }[/math] and [math]\displaystyle{ \lambda_i=\beta\theta_i }[/math] (zero under our convention of no external field).
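
With this identification, fitting the multipliers to data reduces to adjusting them until the model moments match the empirical ones. Below is a minimal gradient-ascent sketch (often called Boltzmann learning) that uses exact enumeration, so it is only feasible for small [math]\displaystyle{ n }[/math]; the learning rate and iteration count are illustrative assumptions:

<syntaxhighlight lang="python">
import itertools
import numpy as np

def fit_max_entropy(mag, corr, steps=2000, lr=0.05):
    """Adjust lambda_i, lambda_ij until model moments match the data moments."""
    n = mag.shape[0]
    configs = np.array(list(itertools.product([-1, 1], repeat=n)))
    lam_i = np.zeros(n)
    lam_ij = np.zeros((n, n))
    for _ in range(steps):
        # Boltzmann weights of the current maximum-entropy model
        logw = configs @ lam_i + np.einsum('ti,ij,tj->t', configs, lam_ij, configs)
        w = np.exp(logw - logw.max())
        p = w / w.sum()
        # model moments <sigma_i> and <sigma_i sigma_j>
        model_mag = p @ configs
        model_corr = configs.T @ (configs * p[:, None])
        # moment-matching gradient updates
        lam_i += lr * (mag - model_mag)
        lam_ij += lr * (corr - model_corr)
    return lam_i, lam_ij
</syntaxhighlight>

For larger systems the exact enumeration must be replaced by Monte Carlo estimates of the model moments or by mean-field approximations.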

