# Mean-field theory

In physics and probability theory, mean-field theory (aka MFT or rarely self-consistent field theory) studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom. Such models consider many individual components that interact with each other. In MFT, the effect of all the other individuals on any given individual is approximated by a single averaged effect, thus reducing a many-body problem to a one-body problem.

The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field.[1] This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost.

MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience[2], artificial intelligence, epidemic models,[3] queueing theory,[4] computer-network performance and game theory,[5] as in the quantal response equilibrium.

## Origins

The ideas first appeared in physics (statistical mechanics) in the work of Pierre Curie[6] and Pierre Weiss to describe phase transitions.[7] MFT has been used in the Bragg–Williams approximation, models on Bethe lattice, Landau theory, Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory.

Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original solvable and open to calculation. Sometimes, MFT gives very accurate approximations.

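
The combinatorial difficulty is easy to see concretely. The following sketch (an illustration added here, not part of the original text) computes the partition function of a small one-dimensional Ising chain by brute-force summation over all 2^N spin configurations; the cost doubles with every added spin, which is why closed-form solutions or approximations such as MFT are valuable.

```python
import itertools
import math

def partition_function(N, J=1.0, h=0.0, beta=1.0):
    """Brute-force partition function of an open 1D Ising chain of N spins."""
    Z = 0.0
    for spins in itertools.product([-1, 1], repeat=N):
        # Nearest-neighbour interaction energy plus external-field term.
        E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
        E -= h * sum(spins)
        Z += math.exp(-beta * E)
    return Z

print(partition_function(10))  # 2**10 terms: feasible; N = 50 would need 2**50
```

For N = 2 the sum has only four terms and reduces to 2e^{βJ} + 2e^{-βJ}, which the function reproduces; beyond a few dozen spins the direct sum is hopeless.
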
In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean field".

Quite often, MFT provides a convenient launch point to studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbative results or Feynman diagrams that correct the mean-field approximation.

## Validity

In general, dimensionality plays a strong role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension, above which MFT is valid and below which it is not.

Heuristically, many interactions are replaced in MFT by one effective interaction. So if the field or particle exhibits many random interactions in the original system, they tend to cancel each other out, so the mean effective interaction and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest.

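
This cancellation argument can be illustrated numerically (a sketch with assumed toy numbers, not from the source): if the local field on a spin is the average of z independent random ±1 neighbour contributions, its fluctuation shrinks roughly as 1/sqrt(z), so a single averaged field becomes an ever better stand-in as the number of interaction partners grows.

```python
import random
import statistics

random.seed(0)  # reproducible toy experiment

def mean_field_spread(z, samples=2000):
    """Std. deviation of the per-neighbour average of z random ±1 couplings."""
    means = [statistics.fmean(random.choice([-1, 1]) for _ in range(z))
             for _ in range(samples)]
    return statistics.stdev(means)

# More neighbours -> smaller fluctuation of the averaged field (~ 1/sqrt(z)).
print(mean_field_spread(4), mean_field_spread(400))
```

With z = 400 the spread is roughly ten times smaller than with z = 4, matching the 1/sqrt(z) scaling and the claim that MFT improves with high dimensionality or long-range forces.
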
## Formal approach (Hamiltonian)

The formal basis for mean-field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian

$\displaystyle{ \mathcal{H} = \mathcal{H}_0 + \Delta \mathcal{H} }$

has the following upper bound:

$\displaystyle{ F \leq F_0 \ \stackrel{\mathrm{def}}{=}\ \langle \mathcal{H} \rangle_0 - T S_0, }$

where $\displaystyle{ S_0 }$ is the entropy, and $\displaystyle{ F }$ and $\displaystyle{ F_0 }$ are Helmholtz free energies. The average is taken over the equilibrium ensemble of the reference system with Hamiltonian $\displaystyle{ \mathcal{H}_0 }$. In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as

$\displaystyle{ \mathcal{H}_0 = \sum_{i=1}^N h_i(\xi_i), }$

where $\displaystyle{ \xi_i }$ are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimizing the right side of the inequality. The minimizing reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean-field approximation.

For the most common case that the target Hamiltonian contains only pairwise interactions, i.e.,

$\displaystyle{ \mathcal{H} = \sum_{(i,j) \in \mathcal{P}} V_{i,j}(\xi_i, \xi_j), }$

where $\displaystyle{ \mathcal{P} }$ is the set of pairs that interact, the minimizing procedure can be carried out formally. Define $\displaystyle{ \operatorname{Tr}_i f(\xi_i) }$ as the generalized sum of the observable $\displaystyle{ f }$ over the degrees of freedom of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by

$\displaystyle{ \begin{align} F_0 &= \operatorname{Tr}_{1,2,\ldots,N} \mathcal{H}(\xi_1, \xi_2, \ldots, \xi_N) P^{(N)}_0(\xi_1, \xi_2, \ldots, \xi_N) \\ &+ kT \,\operatorname{Tr}_{1,2,\ldots,N} P^{(N)}_0(\xi_1, \xi_2, \ldots, \xi_N) \log P^{(N)}_0(\xi_1, \xi_2, \ldots, \xi_N), \end{align} }$

where $\displaystyle{ P^{(N)}_0(\xi_1, \xi_2, \dots, \xi_N) }$ is the probability to find the reference system in the state specified by the variables $\displaystyle{ (\xi_1, \xi_2, \dots, \xi_N) }$. This probability is given by the normalized Boltzmann factor

$\displaystyle{ \begin{align} P^{(N)}_0(\xi_1, \xi_2, \ldots, \xi_N) &= \frac{1}{Z^{(N)}_0} e^{-\beta \mathcal{H}_0(\xi_1, \xi_2, \ldots, \xi_N)} \\ &= \prod_{i=1}^N \frac{1}{Z_0} e^{-\beta h_i(\xi_i)} \ \stackrel{\mathrm{def}}{=}\ \prod_{i=1}^N P^{(i)}_0(\xi_i), \end{align} }$

where $\displaystyle{ Z_0 }$ is the partition function. Thus

$\displaystyle{ \begin{align} F_0 &= \sum_{(i,j) \in \mathcal{P}} \operatorname{Tr}_{i,j} V_{i,j}(\xi_i, \xi_j) P^{(i)}_0(\xi_i) P^{(j)}_0(\xi_j) \\ &+ kT \sum_{i=1}^N \operatorname{Tr}_i P^{(i)}_0(\xi_i) \log P^{(i)}_0(\xi_i). \end{align} }$

In order to minimize, we take the derivative with respect to the single-degree-of-freedom probabilities $\displaystyle{ P^{(i)}_0 }$ using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations

$\displaystyle{ P^{(i)}_0(\xi_i) = \frac{1}{Z_0} e^{-\beta h_i^{MF}(\xi_i)},\quad i = 1, 2, \ldots, N, }$

where the mean field is given by

$\displaystyle{ h_i^\text{MF}(\xi_i) = \sum_{\{j \mid (i,j) \in \mathcal{P}\}} \operatorname{Tr}_j V_{i,j}(\xi_i, \xi_j) P^{(j)}_0(\xi_j). }$
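
These self-consistency equations can be solved by simple fixed-point iteration. The sketch below is illustrative and makes simplifying assumptions not in the text: identical two-state spins s = ±1, a uniform pairwise coupling V(ξ_i, ξ_j) = -J s_i s_j, and z interacting neighbours per site, so that h^MF(s) = -J z s m with m the mean spin.

```python
import math

def solve_self_consistency(J=1.0, z=4, beta=1.0, tol=1e-12, max_iter=10000):
    """Iterate P(s) ∝ exp(-beta * h_MF(s)) until the distribution stops changing."""
    P = {+1: 0.6, -1: 0.4}              # biased start to select an ordered branch
    for _ in range(max_iter):
        m = P[+1] - P[-1]               # current mean spin
        h_MF = {s: -J * z * s * m for s in (+1, -1)}   # assumed mean field per state
        w = {s: math.exp(-beta * h_MF[s]) for s in (+1, -1)}
        Z0 = w[+1] + w[-1]              # single-site partition function
        newP = {s: w[s] / Z0 for s in (+1, -1)}
        if abs(newP[+1] - P[+1]) < tol:
            return newP
        P = newP
    return P

P = solve_self_consistency(beta=2.0)    # low temperature: ordered solution
print(P[+1] - P[-1])                    # mean spin m is non-zero here
```

At high temperature (small beta) the same iteration collapses to the uniform distribution, m = 0, anticipating the phase-transition discussion below.
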

## Applications

Mean-field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions.[8]

### Ising model

Consider the Ising model on a $\displaystyle{ d }$-dimensional lattice. The Hamiltonian is given by

$\displaystyle{ H = -J \sum_{\langle i, j \rangle} s_i s_j - h \sum_i s_i, }$

where the $\displaystyle{ \sum_{\langle i, j \rangle} }$ indicates summation over the pair of nearest neighbors $\displaystyle{ \langle i, j \rangle }$, and $\displaystyle{ s_i, s_j = \pm 1 }$ are neighboring Ising spins.

Let us transform our spin variable by introducing the fluctuation from its mean value $\displaystyle{ m_i \equiv \langle s_i \rangle }$. We may rewrite the Hamiltonian as

$\displaystyle{ H = -J \sum_{\langle i, j \rangle} (m_i + \delta s_i) (m_j + \delta s_j) - h \sum_i s_i, }$

where we define $\displaystyle{ \delta s_i \equiv s_i - m_i }$; this is the fluctuation of the spin.

If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values.

The mean-field approximation consists of neglecting this second-order fluctuation term:

$\displaystyle{ H \approx H^\text{MF} \equiv -J \sum_{\langle i, j \rangle} (m_i m_j + m_i \delta s_j + m_j \delta s_i) - h \sum_i s_i. }$

These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions.

Again, the summand can be reexpanded. In addition, we expect that the mean value of each spin is site-independent, since the Ising lattice is translationally invariant. This yields

$\displaystyle{ H^\text{MF} = -J \sum_{\langle i, j \rangle} \big(m^2 + 2m(s_i - m)\big) - h \sum_i s_i. }$

The summation over neighboring spins can be rewritten as $\displaystyle{ \sum_{\langle i, j \rangle} = \frac{1}{2} \sum_i \sum_{j \in nn(i)} }$, where $\displaystyle{ nn(i) }$ means "nearest neighbor of $\displaystyle{ i }$", and the $\displaystyle{ 1/2 }$ prefactor avoids double counting, since each bond participates in two spins. Simplifying leads to the final expression

$\displaystyle{ H^\text{MF} = \frac{J m^2 N z}{2} - \underbrace{(h + m J z)}_{h^\text{eff.}} \sum_i s_i, }$

where $\displaystyle{ z }$ is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field $\displaystyle{ h^\text{eff.} = h + J z m }$, which is the sum of the external field $\displaystyle{ h }$ and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension $\displaystyle{ d }$, $\displaystyle{ z = 2 d }$).

Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain

$\displaystyle{ Z = e^{-\frac{\beta J m^2 Nz}{2}} \left[2 \cosh\left(\frac{h + m J z}{k_\text{B} T}\right)\right]^N, }$

where $\displaystyle{ N }$ is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization $\displaystyle{ m }$ as a function of $\displaystyle{ h^\text{eff.} }$.

We thus have two equations between $\displaystyle{ m }$ and $\displaystyle{ h^\text{eff.} }$, allowing us to determine $\displaystyle{ m }$ as a function of temperature. This leads to the following observation:

• For temperatures greater than a certain value $\displaystyle{ T_\text{c} }$, the only solution is $\displaystyle{ m = 0 }$. The system is paramagnetic.

• For $\displaystyle{ T \lt T_\text{c} }$, there are two non-zero solutions: $\displaystyle{ m = \pm m_0 }$. The system is ferromagnetic.

$\displaystyle{ T_\text{c} }$ is given by the following relation: $\displaystyle{ T_\text{c} = \frac{J z}{k_\text{B}} }$.

This shows that MFT can account for the ferromagnetic phase transition.
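
As a quick numerical illustration (a sketch added here, not from the original), the self-consistency condition implied by the partition function above, m = tanh((h + Jzm)/(k_B T)) with k_B set to 1, can be solved by fixed-point iteration. The iterate settles on a non-zero magnetization below T_c = Jz and decays to zero above it, reproducing the two regimes just described.

```python
import math

def magnetization(T, J=1.0, z=4, h=0.0, m0=0.5, iters=5000):
    """Fixed-point iteration of m = tanh((h + J*z*m) / T), with kB = 1."""
    m = m0
    for _ in range(iters):
        m = math.tanh((h + J * z * m) / T)
    return m

Tc = 1.0 * 4                     # J * z with kB = 1
print(magnetization(0.5 * Tc))   # ferromagnetic phase: |m| > 0
print(magnetization(2.0 * Tc))   # paramagnetic phase: m decays to 0
```

Sweeping T through Tc with this function traces out the mean-field magnetization curve and its continuous vanishing at the critical temperature.
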

### Application to other systems

Similarly, MFT can be applied to other types of Hamiltonian as in the following cases:

• To study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap $\displaystyle{ \Delta }$.

## Extension to time-dependent mean fields

In mean-field theory, the mean field appearing in the single-site problem is a scalar or vectorial time-independent quantity. However, this need not always be the case: in a variant of mean-field theory called dynamical mean-field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition.

This page was moved from wikipedia:en:Mean-field theory. Its edit history can be viewed at 平均场理论/edithistory

1. Chaikin, P. M.; Lubensky, T. C. (2007). Principles of condensed matter physics (4th print ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-79450-3.
2. Parr, Thomas; Sajid, Noor; Friston, Karl (2020). "Modules or Mean-Fields?" (PDF). Entropy. 22 (552): 552. doi:10.3390/e22050552. Retrieved 22 May 2020.
3. Boudec, J. Y. L.; McDonald, D.; Mundinger, J. (2007). "A Generic Mean Field Convergence Result for Systems of Interacting Objects". Fourth International Conference on the Quantitative Evaluation of Systems (QEST 2007). pp. 3. doi:10.1109/QEST.2007.8. ISBN 978-0-7695-2883-0.
4. Baccelli, F.; Karpelevich, F. I.; Kelbert, M. Y.; Puhalskii, A. A.; Rybko, A. N.; Suhov, Y. M. (1992). "A mean-field limit for a class of queueing networks". Journal of Statistical Physics. 66 (3–4): 803. Bibcode:1992JSP....66..803B. doi:10.1007/BF01055703.
5. Lasry, J. M.; Lions, P. L. (2007). "Mean field games" (PDF). Japanese Journal of Mathematics. 2: 229–260. doi:10.1007/s11537-007-0657-8.
6. Kadanoff, L. P. (2009). "More is the Same; Phase Transitions and Mean Field Theories". Journal of Statistical Physics. 137 (5–6): 777–797. arXiv:0906.0653. Bibcode:2009JSP...137..777K. doi:10.1007/s10955-009-9814-1.
7. Weiss, Pierre (1907). "L'hypothèse du champ moléculaire et la propriété ferromagnétique". J. Phys. Theor. Appl. 6 (1): 661–690. doi:10.1051/jphystap:019070060066100.
8. Stanley, H. E. (1971). "Mean Field Theory of Magnetic Phase Transitions". Introduction to Phase Transitions and Critical Phenomena. Oxford University Press. ISBN 0-19-505316-8.