# Normal distribution

{{Infobox probability distribution
 | name       = Normal distribution
 | type       = density
 | pdf_image  = Normal Distribution PDF.svg
 | pdf_caption = The red curve is the standard normal distribution
 | cdf_image  = Normal Distribution CDF.svg
 | cdf_caption =
 | notation   = $\displaystyle{ \mathcal{N}(\mu,\sigma^2) }$
 | parameters = $\displaystyle{ \mu\in\R }$ = mean (location), $\displaystyle{ \sigma^2\gt 0 }$ = variance (squared scale)
 | support    = $\displaystyle{ x\in\R }$
 | pdf        = $\displaystyle{ \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2} }$
 | cdf        = $\displaystyle{ \frac{1}{2}\left[1 + \operatorname{erf}\left( \frac{x-\mu}{\sigma\sqrt{2}}\right)\right] }$
 | quantile   = $\displaystyle{ \mu+\sigma\sqrt{2} \operatorname{erf}^{-1}(2p-1) }$
 | mean       = $\displaystyle{ \mu }$
 | median     = $\displaystyle{ \mu }$
 | mode       = $\displaystyle{ \mu }$
 | variance   = $\displaystyle{ \sigma^2 }$
 | mad        = $\displaystyle{ \sigma\sqrt{2/\pi} }$
 | skewness   = $\displaystyle{ 0 }$
 | kurtosis   = $\displaystyle{ 0 }$
 | entropy    = $\displaystyle{ \frac{1}{2} \log(2\pi e\sigma^2) }$
 | mgf        = $\displaystyle{ \exp(\mu t + \sigma^2t^2/2) }$
 | char       = $\displaystyle{ \exp(i\mu t - \sigma^2 t^2/2) }$
 | fisher     = $\displaystyle{ \mathcal{I}(\mu,\sigma) =\begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2\end{pmatrix} }$ $\displaystyle{ \mathcal{I}(\mu,\sigma^2) =\begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 1/(2\sigma^4)\end{pmatrix} }$
 | KLDiv      = $\displaystyle{ { 1 \over 2 } \left\{ \left( \frac{\sigma_0}{\sigma_1} \right)^2 + \frac{(\mu_1 - \mu_0)^2}{\sigma_1^2} - 1 + 2 \ln {\sigma_1 \over \sigma_0} \right\} }$
}}

In probability theory, a normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is


$\displaystyle{ f(x) = \frac{1}{\sigma \sqrt{2\pi} } e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} }$

The parameter $\displaystyle{ \mu }$ is the mean or expectation of the distribution (and also its median and mode), while the parameter $\displaystyle{ \sigma }$ is its standard deviation. The variance of the distribution is $\displaystyle{ \sigma^2 }$. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.
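As a quick numerical check, the density formula above can be evaluated directly and compared with a library implementation; a minimal sketch in Python, assuming scipy is available (the values μ = 1, σ = 2 are arbitrary illustrations):

```python
import math

from scipy.stats import norm

def normal_pdf(x, mu, sigma):
    """General normal density f(x), straight from the formula above."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 1.0, 2.0  # arbitrary illustrative parameters
for x in (-1.0, 0.0, 1.0, 3.0):
    assert abs(normal_pdf(x, mu, sigma) - norm.pdf(x, loc=mu, scale=sigma)) < 1e-12
```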


Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.
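The central limit theorem can be illustrated with a few lines of simulation; a sketch assuming numpy, in which averages of uniform draws (each draw far from normal on its own) come out close to a normal with the predicted mean and standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 averages, each over 1,000 uniform(0, 1) samples.
averages = rng.uniform(0.0, 1.0, size=(10_000, 1_000)).mean(axis=1)

# The CLT predicts mean 1/2 and standard deviation sqrt((1/12) / 1000).
print(averages.mean())                          # close to 0.5
print(averages.std(), np.sqrt(1 / 12 / 1_000))  # both close to 0.0091
```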


Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.


A normal distribution is sometimes informally called a bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions).


## Definitions

### Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution. This is a special case when $\displaystyle{ \mu=0 }$ and $\displaystyle{ \sigma =1 }$, and it is described by this probability density function:



$\displaystyle{ \varphi(x) = \frac 1{\sqrt{2\pi}}e^{- \frac 12 x^2} }$


Here, the factor $\displaystyle{ 1/\sqrt{2\pi} }$ ensures that the total area under the curve $\displaystyle{ \varphi(x) }$ is equal to one. The factor $\displaystyle{ 1/2 }$ in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around $\displaystyle{ x=0 }$, where it attains its maximum value $\displaystyle{ 1/\sqrt{2\pi} }$ and has inflection points at $\displaystyle{ x=+1 }$ and $\displaystyle{ x=-1 }$.

Authors differ on which normal distribution should be called the "standard" one. Carl Friedrich Gauss, for example, defined the standard normal as having a variance of $\displaystyle{ \sigma^2 = 1/2 }$. That is:

$\displaystyle{ \varphi(x) = \frac{e^{-x^2}}{\sqrt\pi} }$


On the other hand, Stephen Stigler goes even further, defining the standard normal as having a variance of $\displaystyle{ \sigma^2 = 1/(2\pi) }$:

$\displaystyle{ \varphi(x) = e^{-\pi x^2} }$


### General normal distribution

Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor $\displaystyle{ \sigma }$ (the standard deviation) and then translated by $\displaystyle{ \mu }$ (the mean value):

$\displaystyle{ f(x \mid \mu, \sigma^2) =\frac 1 \sigma \varphi\left(\frac{x-\mu} \sigma \right) }$


The probability density must be scaled by $\displaystyle{ 1/\sigma }$ so that the integral is still 1.


If $\displaystyle{ Z }$ is a standard normal deviate, then $\displaystyle{ X=\sigma Z + \mu }$ will have a normal distribution with expected value $\displaystyle{ \mu }$ and standard deviation $\displaystyle{ \sigma }$. Conversely, if $\displaystyle{ X }$ is a normal deviate with parameters $\displaystyle{ \mu }$ and $\displaystyle{ \sigma^2 }$, then the distribution $\displaystyle{ Z=(X-\mu)/\sigma }$ will have a standard normal distribution. This variate is also called the standardized form of $\displaystyle{ X }$.
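The standardization described above is easy to verify numerically; a minimal sketch with scipy.stats, using the illustrative values μ = 3, σ = 2:

```python
from scipy.stats import norm

mu, sigma = 3.0, 2.0  # arbitrary illustrative parameters
x = 4.5

# P(X <= x) for X ~ N(mu, sigma^2) equals P(Z <= (x - mu)/sigma) for standard Z.
lhs = norm.cdf(x, loc=mu, scale=sigma)
rhs = norm.cdf((x - mu) / sigma)  # standard normal by default
assert abs(lhs - rhs) < 1e-15
```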


### Notation

The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter $\displaystyle{ \phi }$ (phi). The alternative form of the Greek letter phi, $\displaystyle{ \varphi }$, is also used quite often.


The normal distribution is often referred to as $\displaystyle{ N(\mu,\sigma^2) }$ or $\displaystyle{ \mathcal{N}(\mu,\sigma^2) }$. Thus when a random variable $\displaystyle{ X }$ is normally distributed with mean $\displaystyle{ \mu }$ and standard deviation $\displaystyle{ \sigma }$, one may write


$\displaystyle{ X \sim \mathcal{N}(\mu,\sigma^2). }$


### Alternative parameterizations


Some authors advocate using the precision $\displaystyle{ \tau }$ as the parameter defining the width of the distribution, instead of the deviation $\displaystyle{ \sigma }$ or the variance $\displaystyle{ \sigma^2 }$. The precision is normally defined as the reciprocal of the variance, $\displaystyle{ 1/\sigma^2 }$. The formula for the distribution then becomes

$\displaystyle{ f(x) = \sqrt{\frac\tau{2\pi}} e^{-\tau(x-\mu)^2/2}. }$
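Since τ = 1/σ², the precision form must agree with the usual density once σ is set to 1/√τ; a sketch (τ = 4 is an arbitrary choice):

```python
import math

from scipy.stats import norm

def normal_pdf_precision(x, mu, tau):
    """Normal density in the precision parameterization, tau = 1/sigma**2."""
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

mu, tau = 0.0, 4.0  # arbitrary illustrative parameters
sigma = 1 / math.sqrt(tau)
for x in (-0.5, 0.0, 0.25, 1.0):
    assert abs(normal_pdf_precision(x, mu, tau) - norm.pdf(x, mu, sigma)) < 1e-12
```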


This choice is claimed to have advantages in numerical computations when $\displaystyle{ \sigma }$ is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.


Alternatively, the reciprocal of the standard deviation $\displaystyle{ \tau^\prime=1/\sigma }$ might be defined as the precision, in which case the expression of the normal distribution becomes


$\displaystyle{ f(x) = \frac{\tau^\prime}{\sqrt{2\pi}} e^{-(\tau^\prime)^2(x-\mu)^2/2}. }$


According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.


Normal distributions form an exponential family with natural parameters $\displaystyle{ \textstyle\theta_1=\frac{\mu}{\sigma^2} }$ and $\displaystyle{ \textstyle\theta_2=\frac{-1}{2\sigma^2} }$, and natural statistics $\displaystyle{ x }$ and $\displaystyle{ x^2 }$. The dual expectation parameters for the normal distribution are $\displaystyle{ \eta_1 = \mu }$ and $\displaystyle{ \eta_2 = \mu^2 + \sigma^2 }$.


### Cumulative distribution function

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $\displaystyle{ \Phi }$ (phi), is the integral

$\displaystyle{ \Phi(x) = \frac 1 {\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt }$



The related error function $\displaystyle{ \operatorname{erf}(x) }$ gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2 falling in the range $\displaystyle{ [-x, x] }$. That is:

$\displaystyle{ \operatorname{erf}(x) = \frac 2 {\sqrt\pi} \int_0^x e^{-t^2} \, dt }$

These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more.


The two functions are closely related, namely

$\displaystyle{ \Phi(x) = \frac{1}{2} \left[1 + \operatorname{erf}\left( \frac x {\sqrt 2} \right) \right] }$
For a generic normal distribution with density $\displaystyle{ f }$, mean $\displaystyle{ \mu }$ and deviation $\displaystyle{ \sigma }$, the cumulative distribution function is

$\displaystyle{ F(x) = \Phi\left(\frac{x-\mu} \sigma \right) = \frac{1}{2} \left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma \sqrt 2 }\right)\right] }$


The complement of the standard normal CDF, $\displaystyle{ Q(x) = 1 - \Phi(x) }$, is often called the Q-function, especially in engineering texts. It gives the probability that the value of a standard normal random variable $\displaystyle{ X }$ will exceed $\displaystyle{ x }$: $\displaystyle{ P(X\gt x) }$. Other definitions of the $\displaystyle{ Q }$-function, all of which are simple transformations of $\displaystyle{ \Phi }$, are also used occasionally.


The graph of the standard normal CDF $\displaystyle{ \Phi }$ has 2-fold rotational symmetry around the point (0,1/2); that is, $\displaystyle{ \Phi(-x) = 1 - \Phi(x) }$. Its antiderivative (indefinite integral) can be expressed as follows:

$\displaystyle{ \int \Phi(x)\, dx = x\Phi(x) + \varphi(x) + C. }$
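The antiderivative can be checked against direct numerical integration (the constant C cancels in a definite integral); a sketch using scipy.integrate.quad:

```python
from scipy.integrate import quad
from scipy.stats import norm

def antiderivative(x):
    """x*Phi(x) + phi(x), the antiderivative above with C = 0."""
    return x * norm.cdf(x) + norm.pdf(x)

a = 1.7  # arbitrary upper limit
numeric, _ = quad(norm.cdf, 0.0, a)
closed_form = antiderivative(a) - antiderivative(0.0)
assert abs(numeric - closed_form) < 1e-8
```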

The CDF of the standard normal distribution can be expanded by Integration by parts into a series:


$\displaystyle{ \Phi(x)=\frac{1}{2} + \frac{1}{\sqrt{2\pi}}\cdot e^{-x^2/2} \left[x + \frac{x^3}{3} + \frac{x^5}{3\cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots\right] }$


where $\displaystyle{ !! }$ denotes the double factorial.
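The series converges quickly for moderate x, each term arising from the previous one by multiplication with x²/(2n+1); a sketch of the partial sum:

```python
import math

from scipy.stats import norm

def Phi_series(x, terms=40):
    """Partial sum of the series expansion of the standard normal CDF."""
    total, term = 0.0, x  # first term of the bracket: x**1 / 1!!
    for n in range(1, terms):
        total += term
        term *= x * x / (2 * n + 1)  # next term: x**(2n+1) / (2n+1)!!
    total += term
    return 0.5 + math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * total

for x in (0.5, 1.0, 2.0):
    assert abs(Phi_series(x) - norm.cdf(x)) < 1e-12
```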

An asymptotic expansion of the CDF for large x can also be derived using integration by parts. For more, see Error function#Asymptotic expansion.


#### Standard deviation and coverage

File:Standard deviation diagram.svg

For the normal distribution, the values less than one standard deviation away from the mean account for 68.27% of the set; while two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.

About 68% of values drawn from a normal distribution are within one standard deviation σ away from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This fact is known as the 68-95-99.7 (empirical) rule, or the 3-sigma rule.


More precisely, the probability that a normal deviate lies in the range between $\displaystyle{ \mu-n\sigma }$ and $\displaystyle{ \mu+n\sigma }$ is given by

$\displaystyle{ F(\mu+n\sigma) - F(\mu-n\sigma) = \Phi(n)-\Phi(-n) = \operatorname{erf} \left(\frac{n}{\sqrt{2}}\right). }$

To 12 significant figures, the values for $\displaystyle{ n=1,2,\ldots , 6 }$ are:


| $\displaystyle{ n }$ | $\displaystyle{ p= F(\mu+n\sigma) - F(\mu-n\sigma) }$ | $\displaystyle{ \text{i.e. }1-p }$ | $\displaystyle{ \text{or }1\text{ in }p }$ | OEIS |
|---|---|---|---|---|
| 1 | 0.682689492137 | 0.317310507863 | 3.15148718753 | |
| 2 | 0.954499736104 | 0.045500263896 | 21.9778945080 | |
| 3 | 0.997300203937 | 0.002699796063 | 370.398347345 | |
| 4 | 0.999936657516 | 0.000063342484 | 15787.1927673 | |
| 5 | 0.999999426697 | 0.000000573303 | 1744277.89362 | |
| 6 | 0.999999998027 | 0.000000001973 | 506797345.897 | |


For large $\displaystyle{ n }$, one can use the approximation $\displaystyle{ 1 - p \approx \frac{e^{-n^2/2}}{n\sqrt{\pi/2}} }$.
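Both the exact column of the table and the large-n tail approximation can be reproduced with math.erf; a minimal sketch (the approximation is only accurate for large n):

```python
import math

for n in range(1, 7):
    p = math.erf(n / math.sqrt(2))  # P(mu - n*sigma < X < mu + n*sigma)
    tail = 1.0 - p
    approx = math.exp(-n * n / 2) / (n * math.sqrt(math.pi / 2))
    print(f"n={n}: p={p:.12f}  1-p={tail:.3e}  approx={approx:.3e}")
```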


#### Quantile function


The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:


$\displaystyle{ \Phi^{-1}(p) = \sqrt2\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1). }$

For a normal random variable with mean $\displaystyle{ \mu }$ and variance $\displaystyle{ \sigma^2 }$, the quantile function is


$\displaystyle{ F^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu + \sigma\sqrt 2 \operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1). }$

The quantile $\displaystyle{ \Phi^{-1}(p) }$ of the standard normal distribution is commonly denoted as $\displaystyle{ z_p }$. These values are used in hypothesis testing, construction of confidence intervals and Q-Q plots. A normal random variable $\displaystyle{ X }$ will exceed $\displaystyle{ \mu + z_p\sigma }$ with probability $\displaystyle{ 1-p }$, and will lie outside the interval $\displaystyle{ \mu \pm z_p\sigma }$ with probability $\displaystyle{ 2(1-p) }$. In particular, the quantile $\displaystyle{ z_{0.975} }$ is 1.96; therefore a normal random variable will lie outside the interval $\displaystyle{ \mu \pm 1.96\sigma }$ in only 5% of cases.
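In scipy.stats these quantiles are exposed as the percent-point function norm.ppf; a minimal sketch recovering the familiar 1.96:

```python
from scipy.stats import norm

z_975 = norm.ppf(0.975)  # quantile z_p for p = 0.975
print(round(z_975, 6))   # 1.959964

# A normal variable lies outside mu +/- 1.96*sigma in about 5% of cases.
print(2 * (1 - norm.cdf(z_975)))  # 0.05
```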

The following table gives the quantile $\displaystyle{ z_p }$ such that $\displaystyle{ X }$ will lie in the range $\displaystyle{ \mu \pm z_p\sigma }$ with a specified probability $\displaystyle{ p }$. These values are useful to determine tolerance interval for sample averages and other statistical estimators with normal (or asymptotically normal) distributions:. NOTE: the following table shows $\displaystyle{ \sqrt 2 \operatorname{erf}^{-1}(p)=\Phi^{-1}\left(\frac{p+1}{2}\right) }$, not $\displaystyle{ \Phi^{-1}(p) }$ as defined above.

| $\displaystyle{ p }$ | $\displaystyle{ z_p }$ | $\displaystyle{ p }$ | $\displaystyle{ z_p }$ |
|---|---|---|---|
| 0.80 | 1.281551565545 | 0.999 | 3.290526731492 |
| 0.90 | 1.644853626951 | 0.9999 | 3.890591886413 |
| 0.95 | 1.959963984540 | 0.99999 | 4.417173413469 |
| 0.98 | 2.326347874041 | 0.999999 | 4.891638475699 |
| 0.99 | 2.575829303549 | 0.9999999 | 5.326723886384 |
| 0.995 | 2.807033768344 | 0.99999999 | 5.730728868236 |
| 0.998 | 3.090232306168 | 0.999999999 | 6.109410204869 |

For small $\displaystyle{ p }$, the quantile function has the useful asymptotic expansion

$\displaystyle{ \Phi^{-1}(p)=-\sqrt{\ln\frac{1}{p^2}-\ln\ln\frac{1}{p^2}-\ln(2\pi)}+\mathcal{o}(1). }$
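A sketch comparing this expansion with the exact quantile for small p; agreement is only approximate, since the error term is o(1):

```python
import math

from scipy.stats import norm

def probit_asymptotic(p):
    """Leading-order asymptotic expansion of Phi^{-1}(p) for small p."""
    u = math.log(1.0 / (p * p))
    return -math.sqrt(u - math.log(u) - math.log(2 * math.pi))

for p in (1e-3, 1e-6, 1e-10):
    print(p, probit_asymptotic(p), norm.ppf(p))
```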

## Properties

The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance (Cover & Thomas 2006; Park & Bera 2009). Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other (Geary 1936; Lukas 1942).

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal distribution is practically zero when the value $\displaystyle{ x }$ lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers (values that lie many standard deviations away from the mean), and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.


The Gaussian distribution belongs to the family of stable distributions which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution.


### Symmetries and derivatives

The normal distribution with density $\displaystyle{ f(x) }$ (mean $\displaystyle{ \mu }$ and standard deviation $\displaystyle{ \sigma \gt 0 }$) has the following properties:



• It is symmetric around the point $\displaystyle{ x=\mu, }$ which is at the same time the mode, the median and the mean of the distribution.


• It is unimodal: its first derivative is positive for $\displaystyle{ x\lt \mu, }$ negative for $\displaystyle{ x\gt \mu, }$ and zero only at $\displaystyle{ x=\mu. }$


• The area under the curve and over the $\displaystyle{ x }$-axis is unity (i.e. equal to one).
• Its first derivative is $\displaystyle{ f^\prime(x)=-\frac{x-\mu}{\sigma^2} f(x). }$


• Its density has two inflection points (where the second derivative of $\displaystyle{ f }$ is zero and changes sign), located one standard deviation away from the mean, namely at $\displaystyle{ x=\mu-\sigma }$ and $\displaystyle{ x=\mu+\sigma. }$

Furthermore, the density $\displaystyle{ \varphi }$ of the standard normal distribution (i.e. $\displaystyle{ \mu=0 }$ and $\displaystyle{ \sigma=1 }$) also has the following properties:

• Its first derivative is $\displaystyle{ \varphi^\prime(x)=-x\varphi(x). }$ (This and the second derivative below are spot-checked numerically in the sketch after this list.)
• Its second derivative is $\displaystyle{ \varphi^{\prime\prime}(x)=(x^2-1)\varphi(x) }$
• More generally, its nth derivative is $\displaystyle{ \varphi^{(n)}(x) = (-1)^n\operatorname{He}_n(x)\varphi(x), }$ where $\displaystyle{ \operatorname{He}_n(x) }$ is the nth (probabilist) Hermite polynomial.
• The probability that a normally distributed variable $\displaystyle{ X }$ with known $\displaystyle{ \mu }$ and $\displaystyle{ \sigma }$ is in a particular set, can be calculated by using the fact that the fraction $\displaystyle{ Z = (X-\mu)/\sigma }$ has a standard normal distribution.
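The derivative identities in the list above can be verified with central finite differences; a minimal sketch:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

h = 1e-6
for x in (-1.5, 0.3, 2.0):
    first = (phi(x + h) - phi(x - h)) / (2 * h)
    assert abs(first - (-x * phi(x))) < 1e-8          # phi'(x) = -x phi(x)
    second = (phi(x + h) - 2 * phi(x) + phi(x - h)) / (h * h)
    assert abs(second - (x * x - 1) * phi(x)) < 1e-3  # phi''(x) = (x^2 - 1) phi(x)
```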

### Moments


The plain and absolute moments of a variable $\displaystyle{ X }$ are the expected values of $\displaystyle{ X^p }$ and $\displaystyle{ |X|^p }$, respectively. If the expected value $\displaystyle{ \mu }$ of $\displaystyle{ X }$ is zero, these parameters are called central moments. Usually we are interested only in moments with integer order $\displaystyle{ \ p }$.


If $\displaystyle{ X }$ has a normal distribution, these moments exist and are finite for any $\displaystyle{ p }$ whose real part is greater than −1. For any non-negative integer $\displaystyle{ p }$, the plain central moments are:

$\displaystyle{ \operatorname{E}\left[(X-\mu)^p\right] =
  \begin{cases}
    0 & \text{if }p\text{ is odd,} \\
    \sigma^p (p-1)!! & \text{if }p\text{ is even.}
  \end{cases} }$

Here $\displaystyle{ n!! }$ denotes the double factorial, that is, the product of all numbers from $\displaystyle{ n }$ to 1 that have the same parity as $\displaystyle{ n. }$


The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $\displaystyle{ p, }$

$\displaystyle{ \begin{align}
  \operatorname{E}\left[|X - \mu|^p\right] &= \sigma^p (p-1)!! \cdot \begin{cases}
    \sqrt{\frac{2}{\pi}} & \text{if }p\text{ is odd} \\
    1 & \text{if }p\text{ is even}
  \end{cases} \\
  &= \sigma^p \cdot \frac{2^{p/2}\Gamma\left(\frac{p+1} 2 \right)}{\sqrt\pi}.
\end{align} }$

The last formula is valid also for any non-integer $\displaystyle{ p\gt -1. }$ When the mean $\displaystyle{ \mu \ne 0, }$ the plain and absolute moments can be expressed in terms of confluent hypergeometric functions $\displaystyle{ {}_1F_1 }$ and $\displaystyle{ U. }$

$\displaystyle{ \begin{align}
  \operatorname{E}\left[X^p\right] &= \sigma^p\cdot (-i\sqrt 2)^p U\left(-\frac{p}{2}, \frac{1}{2}, -\frac{1}{2} \left( \frac \mu \sigma \right)^2 \right), \\
  \operatorname{E}\left[|X|^p \right] &= \sigma^p \cdot 2^{p/2} \frac {\Gamma\left(\frac{1+p} 2\right)}{\sqrt\pi} {}_1F_1\left( -\frac{p}{2}, \frac{1}{2}, -\frac{1}{2} \left( \frac \mu \sigma \right)^2 \right).
\end{align} }$

These expressions remain valid even if $\displaystyle{ p }$ is not an integer. See also generalized Hermite polynomials.

| Order | Non-central moment | Central moment |
|---|---|---|
| 1 | $\displaystyle{ \mu }$ | $\displaystyle{ 0 }$ |
| 2 | $\displaystyle{ \mu^2+\sigma^2 }$ | $\displaystyle{ \sigma^2 }$ |
| 3 | $\displaystyle{ \mu^3+3\mu\sigma^2 }$ | $\displaystyle{ 0 }$ |
| 4 | $\displaystyle{ \mu^4+6\mu^2\sigma^2+3\sigma^4 }$ | $\displaystyle{ 3\sigma^4 }$ |
| 5 | $\displaystyle{ \mu^5+10\mu^3\sigma^2+15\mu\sigma^4 }$ | $\displaystyle{ 0 }$ |
| 6 | $\displaystyle{ \mu^6+15\mu^4\sigma^2+45\mu^2\sigma^4+15\sigma^6 }$ | $\displaystyle{ 15\sigma^6 }$ |
| 7 | $\displaystyle{ \mu^7+21\mu^5\sigma^2+105\mu^3\sigma^4+105\mu\sigma^6 }$ | $\displaystyle{ 0 }$ |
| 8 | $\displaystyle{ \mu^8+28\mu^6\sigma^2+210\mu^4\sigma^4+420\mu^2\sigma^6+105\sigma^8 }$ | $\displaystyle{ 105\sigma^8 }$ |
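The even rows of the central-moment column follow the σ^p (p−1)!! pattern, which can be confirmed by numerical integration; a sketch with scipy (μ = 0.7, σ = 1.5 are arbitrary):

```python
from scipy.integrate import quad
from scipy.special import factorial2
from scipy.stats import norm

mu, sigma = 0.7, 1.5  # arbitrary illustrative parameters

for p in (2, 4, 6):
    moment, _ = quad(lambda x: (x - mu) ** p * norm.pdf(x, mu, sigma),
                     mu - 12 * sigma, mu + 12 * sigma)
    exact = sigma ** p * float(factorial2(p - 1))  # (p-1)!! for even p
    assert abs(moment - exact) < 1e-6 * exact
```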


The expectation of $\displaystyle{ X }$ conditioned on the event that $\displaystyle{ X }$ lies in an interval $\displaystyle{ [a,b] }$ is given by

$\displaystyle{ \operatorname{E}\left[X \mid a\lt X\lt b \right] = \mu - \sigma^2\frac{f(b)-f(a)}{F(b)-F(a)} }$


where $\displaystyle{ f }$ and $\displaystyle{ F }$ respectively are the density and the cumulative distribution function of $\displaystyle{ X }$. For $\displaystyle{ b=\infty }$ this is known as the inverse Mills ratio. Note that here the density $\displaystyle{ f }$ of $\displaystyle{ X }$ is used, rather than the standard normal density as in the inverse Mills ratio, so here we have $\displaystyle{ \sigma^2 }$ instead of $\displaystyle{ \sigma }$.
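The truncated-mean formula can be compared against a Monte Carlo estimate; a sketch with numpy and scipy (parameters and interval chosen arbitrarily):

```python
import numpy as np
from scipy.stats import norm

mu, sigma, a, b = 1.0, 2.0, 0.0, 3.0  # arbitrary illustrative values

# Closed form: E[X | a < X < b] = mu - sigma^2 * (f(b) - f(a)) / (F(b) - F(a)).
dist = norm(mu, sigma)
closed = mu - sigma ** 2 * (dist.pdf(b) - dist.pdf(a)) / (dist.cdf(b) - dist.cdf(a))

rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=2_000_000)
print(closed, x[(x > a) & (x < b)].mean())  # agree to ~3 decimal places
```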


### Fourier transform and characteristic function

The Fourier transform of a normal density $\displaystyle{ f }$ with mean $\displaystyle{ \mu }$ and standard deviation $\displaystyle{ \sigma }$ is

$\displaystyle{ \hat f(t) = \int_{-\infty}^\infty f(x)e^{-itx} \, dx = e^{ -i\mu t} e^{- \frac12 (\sigma t)^2} }$

where $\displaystyle{ i }$ is the imaginary unit. If the mean $\displaystyle{ \mu=0 }$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation $\displaystyle{ 1/\sigma }$. In particular, the standard normal distribution $\displaystyle{ \varphi }$ is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $\displaystyle{ X }$ is closely connected to the characteristic function $\displaystyle{ \varphi_X(t) }$ of that variable, which is defined as the expected value of $\displaystyle{ e^{itX} }$, as a function of the real variable $\displaystyle{ t }$ (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-value variable $\displaystyle{ t }$. The relation between both is:


$\displaystyle{ \varphi_X(t) = \hat f(-t) }$
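The characteristic function E[e^{itX}] = exp(iμt − σ²t²/2) can be estimated directly by Monte Carlo; a small sketch with numpy (μ, σ, t are arbitrary illustrations):

```python
import numpy as np

mu, sigma, t = 0.5, 1.2, 0.9  # arbitrary illustrative values

rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, size=1_000_000)

estimate = np.exp(1j * t * x).mean()  # sample mean of e^{itX}
exact = np.exp(1j * mu * t - sigma ** 2 * t ** 2 / 2)
print(estimate, exact)  # agree to ~3 decimal places
```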


### Moment and cumulant generating functions

The moment generating function of a real random variable $\displaystyle{ X }$ is the expected value of $\displaystyle{ e^{tX} }$, as a function of the real parameter $\displaystyle{ t }$. For a normal distribution with density $\displaystyle{ f }$, mean $\displaystyle{ \mu }$ and deviation $\displaystyle{ \sigma }$, the moment generating function exists and is equal to

$\displaystyle{ M(t) = \operatorname{E}[e^{tX}] = \hat f(it) = e^{\mu t} e^{\tfrac12 \sigma^2 t^2} }$

The cumulant generating function is the logarithm of the moment generating function, namely


$\displaystyle{ g(t) = \ln M(t) = \mu t + \tfrac 12 \sigma^2 t^2 }$


Since this is a quadratic polynomial in $\displaystyle{ t }$, only the first two cumulants are nonzero, namely the mean $\displaystyle{ \mu }$ and the variance $\displaystyle{ \sigma^2 }$.
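Because g(t) is quadratic, every cumulant beyond the second vanishes; a symbolic sketch, assuming sympy is available, reading cumulants off as derivatives of g at t = 0:

```python
import sympy as sp

t, mu, sigma = sp.symbols("t mu sigma", positive=True)
g = mu * t + sigma ** 2 * t ** 2 / 2  # cumulant generating function from above

# The n-th cumulant is the n-th derivative of g evaluated at t = 0.
cumulants = [sp.diff(g, t, n).subs(t, 0) for n in range(1, 5)]
print(cumulants)  # [mu, sigma**2, 0, 0]
```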


### Stein operator and class

Within Stein's method the Stein operator and class of a random variable $\displaystyle{ X \sim \mathcal{N}(\mu, \sigma^2) }$ are $\displaystyle{ \mathcal{A}f(x) = \sigma^2 f'(x) - (x-\mu)f(x) }$ and $\displaystyle{ \mathcal{F} }$ the class of all absolutely continuous functions $\displaystyle{ f : \R \to \R \mbox{ such that }\mathbb{E}[|f'(X)|]\lt \infty }$.


### Zero-variance limit


In the limit when $\displaystyle{ \sigma }$ tends to zero, the probability density $\displaystyle{ f(x) }$ eventually tends to zero at any $\displaystyle{ x\ne \mu }$, but grows without limit if $\displaystyle{ x = \mu }$, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function when $\displaystyle{ \sigma = 0 }$.

However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's "delta function" $\displaystyle{ \delta }$ translated by the mean $\displaystyle{ \mu }$, that is $\displaystyle{ f(x)=\delta(x-\mu). }$


Its CDF is then the Heaviside step function translated by the mean $\displaystyle{ \mu }$, namely

$\displaystyle{ F(x) = \begin{cases} 0 & \text{if }x \lt \mu \\ 1 & \text{if }x \geq \mu \end{cases} }$

### Maximum entropy

Of all probability distributions over the reals with a specified mean $\displaystyle{ \mu }$ and variance $\displaystyle{ \sigma^2 }$, the normal distribution $\displaystyle{ N(\mu,\sigma^2) }$ is the one with maximum entropy. If $\displaystyle{ X }$ is a continuous random variable with probability density $\displaystyle{ f(x) }$, then the entropy of $\displaystyle{ X }$ is defined as


$\displaystyle{ H(X) = - \int_{-\infty}^\infty f(x)\log f(x)\, dx }$



where $\displaystyle{ f(x)\log f(x) }$ is understood to be zero whenever $\displaystyle{ f(x)=0 }$. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified variance, by using variational calculus. A function with two Lagrange multipliers is defined:


$\displaystyle{ L=\int_{-\infty}^\infty f(x)\ln(f(x))\,dx-\lambda_0\left(1-\int_{-\infty}^\infty f(x)\,dx\right)-\lambda\left(\sigma^2-\int_{-\infty}^\infty f(x)(x-\mu)^2\,dx\right) }$


where $\displaystyle{ f(x) }$ is, for now, regarded as some density function with mean $\displaystyle{ \mu }$ and standard deviation $\displaystyle{ \sigma }$.


At maximum entropy, a small variation $\displaystyle{ \delta f(x) }$ about $\displaystyle{ f(x) }$ will produce a variation $\displaystyle{ \delta L }$ about $\displaystyle{ L }$ which is equal to 0:


$\displaystyle{ 0=\delta L=\int_{-\infty}^\infty \delta f(x)\left (\ln(f(x))+1+\lambda_0+\lambda(x-\mu)^2\right )\,dx }$


Since this must hold for any small $\displaystyle{ \delta f(x) }$, the term in brackets must be zero, and solving for $\displaystyle{ f(x) }$ yields:


$\displaystyle{ f(x)=e^{-\lambda_0-1-\lambda(x-\mu)^2} }$
Using the constraint equations to solve for $\displaystyle{ \lambda_0 }$ and $\displaystyle{ \lambda }$ yields the density of the normal distribution:



$\displaystyle{ f(x, \mu, \sigma)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}} }$

The entropy of a normal distribution is equal to



$\displaystyle{ H(x)=\tfrac{1}{2}(1+\log(2\sigma^2\pi)) }$
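This closed form can be checked against a library implementation; a small sketch (illustrative, with an arbitrary σ):

```python
import numpy as np
from scipy.stats import norm

sigma = 2.5
closed_form = 0.5 * (1 + np.log(2 * sigma**2 * np.pi))  # H(x) = (1/2)(1 + log(2 sigma^2 pi))
print(closed_form, norm(scale=sigma).entropy())         # the two values agree
```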
### Operations on normal deviates


The family of normal distributions is closed under linear transformations: if $\displaystyle{ X }$ is normally distributed with mean $\displaystyle{ \mu }$ and standard deviation $\displaystyle{ \sigma }$, then the variable $\displaystyle{ Y=aX+b }$, for any real numbers $\displaystyle{ a }$ and $\displaystyle{ b }$, is also normally distributed, with

mean $\displaystyle{ a\mu+b }$ and standard deviation $\displaystyle{ |a|\sigma }$.


Also if $\displaystyle{ X_1 }$ and $\displaystyle{ X_2 }$ are two independent normal random variables, with means $\displaystyle{ \mu_1 }$, $\displaystyle{ \mu_2 }$ and standard deviations $\displaystyle{ \sigma_1 }$, $\displaystyle{ \sigma_2 }$, then their sum $\displaystyle{ X_1+X_2 }$ will also be normally distributed,[proof] with mean $\displaystyle{ \mu_1 + \mu_2 }$ and variance $\displaystyle{ \sigma_1^2 + \sigma_2^2 }$.


In particular, if $\displaystyle{ X }$ and $\displaystyle{ Y }$ are independent normal deviates with zero mean and variance $\displaystyle{ \sigma^2 }$, then $\displaystyle{ X + Y }$ and $\displaystyle{ X - Y }$ are also independent and normally distributed, with zero mean and variance $\displaystyle{ 2\sigma^2 }$. This is a special case of the polarization identity.
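A short simulation sketch of this special case (illustrative parameters, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.5
x = rng.normal(0.0, sigma, size=1_000_000)
y = rng.normal(0.0, sigma, size=1_000_000)

s, d = x + y, x - y
print(s.var(), d.var())         # both close to 2 * sigma**2 = 4.5
print(np.corrcoef(s, d)[0, 1])  # close to 0; for jointly normal variables this implies independence
```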


Also, if $\displaystyle{ X_1 }$, $\displaystyle{ X_2 }$ are two independent normal deviates with mean $\displaystyle{ \mu }$ and deviation $\displaystyle{ \sigma }$, and $\displaystyle{ a }$, $\displaystyle{ b }$ are arbitrary real numbers, then the variable


$\displaystyle{ X_3 = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2+b^2}} + \mu }$

is also normally distributed with mean $\displaystyle{ \mu }$ and deviation $\displaystyle{ \sigma }$. It follows that the normal distribution is stable (with exponent $\displaystyle{ \alpha=2 }$).

More generally, any linear combination of independent normal deviates is a normal deviate.

#### Infinite divisibility and Cramér's theorem


For any positive integer $\displaystyle{ \text{n} }$, any normal distribution with mean $\displaystyle{ \mu }$ and variance $\displaystyle{ \sigma^2 }$ is the distribution of the sum of $\displaystyle{ \text{n} }$ independent normal deviates, each with mean $\displaystyle{ \frac{\mu}{n} }$ and variance $\displaystyle{ \frac{\sigma^2}{n} }$. This property is called infinite divisibility.


Conversely, if $\displaystyle{ X_1 }$ and $\displaystyle{ X_2 }$ are independent random variables and their sum $\displaystyle{ X_1+X_2 }$ has a normal distribution, then both $\displaystyle{ X_1 }$ and $\displaystyle{ X_2 }$ must be normal deviates.


This result is known as Cramér’s decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.



#### Bernstein's theorem


Bernstein's theorem states that if $\displaystyle{ X }$ and $\displaystyle{ Y }$ are independent and $\displaystyle{ X + Y }$ and $\displaystyle{ X - Y }$ are also independent, then both X and Y must necessarily have normal distributions.


More generally, if $\displaystyle{ X_1, \ldots, X_n }$ are independent random variables, then two distinct linear combinations $\displaystyle{ \sum{a_kX_k} }$ and $\displaystyle{ \sum{b_kX_k} }$will be independent if and only if all $\displaystyle{ X_k }$ are normal and $\displaystyle{ \sum{a_kb_k\sigma_k^2=0} }$, where $\displaystyle{ \sigma_k^2 }$ denotes the variance of $\displaystyle{ X_k }$.


### Other properties



1. If the characteristic function $\displaystyle{ \phi_X }$ of some random variable $\displaystyle{ X }$ is of the form $\displaystyle{ \phi_X(t) = e^{Q(t)} }$, where $\displaystyle{ Q(t) }$ is a polynomial, then the Marcinkiewicz theorem (named after Józef Marcinkiewicz) asserts that $\displaystyle{ Q }$ can be at most a quadratic polynomial, and therefore $\displaystyle{ X }$ is a normal random variable. The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants.

2. If $\displaystyle{ X }$ and $\displaystyle{ Y }$ are jointly normal and uncorrelated, then they are independent. The requirement that $\displaystyle{ X }$ and $\displaystyle{ Y }$ should be jointly normal is essential; without it the property does not hold.[proof] For non-normal random variables uncorrelatedness does not imply independence.

3. The Kullback–Leibler divergence of one normal distribution $\displaystyle{ X_1 \sim N(\mu_1, \sigma^2_1) }$ from another $\displaystyle{ X_2 \sim N(\mu_2, \sigma^2_2) }$ is given by (see also the code sketch after this list):

$\displaystyle{ D_\mathrm{KL}( X_1 \,\|\, X_2 ) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} + \frac{1}{2}\left( \frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2} \right) }$


The Hellinger distance between the same distributions is equal to

$\displaystyle{ H^2(X_1,X_2) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}} \, e^{-\frac{1}{4}\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}} }$

4. The Fisher information matrix for a normal distribution is diagonal and takes the form

$\displaystyle{ \mathcal I = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix} }$

5. The conjugate prior of the mean of a normal distribution is another normal distribution. Specifically, if $\displaystyle{ x_1, \ldots, x_n }$ are iid $\displaystyle{ \sim N(\mu, \sigma^2) }$ and the prior is $\displaystyle{ \mu \sim N(\mu_0 , \sigma^2_0) }$, then the posterior distribution for the estimator of $\displaystyle{ \mu }$ will be


$\displaystyle{ \mu \mid x_1,\ldots,x_n \sim \mathcal{N}\left( \frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n}+\sigma_0^2},\left( \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{-1} \right) }$

6. The family of normal distributions not only forms an exponential family (EF), but in fact forms a natural exponential family (NEF) with quadratic variance function (NEF-QVF). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise six families, including Poisson, Gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.

7. In information geometry, the family of normal distributions forms a statistical manifold with constant curvature $\displaystyle{ -1 }$. The same family is flat with respect to the (±1)-connections ∇$\displaystyle{ ^{(e)} }$ and ∇$\displaystyle{ ^{(m)} }$.
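The divergence formulas in item 3 translate directly into code. A minimal sketch (the helper names are our own, not from any library):

```python
import numpy as np

def kl_normal(mu1, s1, mu2, s2):
    """KL divergence of N(mu1, s1^2) from N(mu2, s2^2)."""
    r = s1**2 / s2**2
    return (mu1 - mu2) ** 2 / (2 * s2**2) + 0.5 * (r - 1 - np.log(r))

def hellinger_sq(mu1, s1, mu2, s2):
    """Squared Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    pre = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2))
    return 1 - pre * np.exp(-0.25 * (mu1 - mu2) ** 2 / (s1**2 + s2**2))

print(kl_normal(0, 1, 1, 2), hellinger_sq(0, 1, 1, 2))
print(kl_normal(0, 1, 0, 1), hellinger_sq(0, 1, 0, 1))  # both 0 for identical distributions
```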




## Related distributions


### Central limit theorem

As the number of discrete events increases, the function begins to resemble a normal distribution

Comparison of probability density functions, $\displaystyle{ p(k) }$ for the sum of $\displaystyle{ n }$ fair 6-sided dice to show their convergence to a normal distribution with increasing $\displaystyle{ n }$, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).


The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where $\displaystyle{ X_1,\ldots ,X_n }$ are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance $\displaystyle{ \sigma^2 }$ and $\displaystyle{ Z }$ is their


mean scaled by $\displaystyle{ \sqrt{n} }$

$\displaystyle{ Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) }$

Then, as $\displaystyle{ n }$ increases, the probability distribution of $\displaystyle{ Z }$ will tend to the normal distribution with zero mean and variance $\displaystyle{ \sigma^2 }$.
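A compact simulation sketch of this convergence (uniform variables are chosen arbitrarily as the non-normal input; nothing here is from the original text):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 400, 10_000
x = rng.uniform(-1.0, 1.0, size=(reps, n))  # i.i.d. with zero mean and variance 1/3
z = np.sqrt(n) * x.mean(axis=1)             # Z = sqrt(n) * (sample mean)
print(z.mean(), z.var())                    # near 0 and 1/3, matching N(0, sigma^2)
```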

The theorem can be extended to variables $\displaystyle{ (X_i) }$ that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.

Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.

The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

• The binomial distribution $\displaystyle{ B(n,p) }$ is approximately normal with mean $\displaystyle{ np }$ and variance $\displaystyle{ np(1-p) }$ for large $\displaystyle{ n }$ and for $\displaystyle{ p }$ not too close to 0 or 1.
• The Poisson distribution with parameter $\displaystyle{ \lambda }$ is approximately normal with mean $\displaystyle{ \lambda }$ and variance $\displaystyle{ \lambda }$, for large values of $\displaystyle{ \lambda }$.
• The chi-squared distribution $\displaystyle{ \chi^2(k) }$ is approximately normal with mean $\displaystyle{ k }$ and variance $\displaystyle{ 2k }$, for large $\displaystyle{ k }$.


• The Student's t-distribution $\displaystyle{ t(\nu) }$ is approximately normal with mean 0 and variance 1 when $\displaystyle{ \nu }$ is large.
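For instance, the binomial case can be checked directly (a sketch with arbitrary n and p; the 0.5 shift is the usual continuity correction, an extra refinement not discussed above):

```python
from scipy.stats import binom, norm

n, p, k = 400, 0.3, 130
exact = binom.cdf(k, n, p)
approx = norm.cdf(k + 0.5, loc=n * p, scale=(n * p * (1 - p)) ** 0.5)
print(exact, approx)  # close for large n, with p not too near 0 or 1
```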

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.

A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions.

### Operations on a single random variable

If X is distributed normally with mean μ and variance σ2, then

• The exponential of X is distributed log-normally: eX ~ ln(N (μ, σ2)).
• The absolute value of normalized residuals, |X − μ|/σ, has chi distribution with one degree of freedom: |X − μ|/σ ~ $\displaystyle{ \chi_1 }$.

### Combination of two independent random variables

If $\displaystyle{ X_1 }$ and $\displaystyle{ X_2 }$ are two independent standard normal random variables with mean 0 and variance 1, then

• Their sum and difference are distributed normally with mean zero and variance two: $\displaystyle{ X_1 \pm X_2 \sim N(0, 2) }$.
• Their product $\displaystyle{ Z=X_1X_2 }$ follows the Product distribution with density function $\displaystyle{ f_Z(z) = \pi^{-1} K_0(|z|) }$ where $\displaystyle{ K_0 }$ is the modified Bessel function of the second kind. This distribution is symmetric around zero, unbounded at $\displaystyle{ z = 0 }$, and has the characteristic function $\displaystyle{ \phi_Z(t) = (1 + t^2)^{-1/2} }$.
• Their ratio follows the standard Cauchy distribution: $\displaystyle{ X_1/ X_2 \sim \operatorname{Cauchy}(0, 1) }$.
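A simulation sketch of the sum/difference and ratio facts (sample size arbitrary; not from the original text):

```python
import numpy as np

rng = np.random.default_rng(6)
x1 = rng.normal(size=1_000_000)
x2 = rng.normal(size=1_000_000)

print((x1 + x2).var(), (x1 - x2).var())          # both near 2, as N(0, 2) predicts
q25, q50, q75 = np.percentile(x1 / x2, [25, 50, 75])
print(q25, q50, q75)                             # near -1, 0, 1: the quartiles of Cauchy(0, 1)
```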

### Combination of two or more independent random variables


• If $\displaystyle{ X_1, X_2, \ldots, X_n }$ are independent standard normal random variables, then the sum of their squares has the chi-squared distribution with $\displaystyle{ \text{n} }$ degrees of freedom
$\displaystyle{ X_1^2 + \cdots + X_n^2 \sim \chi_n^2. }$



• If $\displaystyle{ X_1, X_2, \ldots, X_n }$ are independent normally distributed random variables with means $\displaystyle{ \mu }$ and variances $\displaystyle{ \sigma^2 }$, then their sample mean is independent from the sample standard deviation, which can be demonstrated using Basu's theorem or Cochran's theorem. The ratio of these two quantities will have the Student's t-distribution with $\displaystyle{ \text{n}-1 }$ degrees of freedom:

$\displaystyle{ t = \frac{\overline X - \mu}{S/\sqrt{n}} = \frac{\frac{1}{n}(X_1+\cdots+X_n) - \mu}{\sqrt{\frac{1}{n(n-1)}\left[(X_1-\overline X)^2+\cdots+(X_n-\overline X)^2\right]}} \sim t_{n-1}. }$

• If $\displaystyle{ X_1, X_2, \ldots, X_n }$, $\displaystyle{ Y_1, Y_2, \ldots, Y_m }$ are independent standard normal random variables, then the ratio of their normalized sums of squares will have the F-distribution with (n, m) degrees of freedom:


$\displaystyle{ F = \frac{\left(X_1^2+X_2^2+\cdots+X_n^2\right)/n}{\left(Y_1^2+Y_2^2+\cdots+Y_m^2\right)/m} \sim F_{n,m}. }$

### Operations on the density function

The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function.


### Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is one-dimensional) case (Case 1). All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.

• The multivariate normal distribution describes the Gaussian law in the k-dimensional Euclidean space. A vector X ∈ Rk is multivariate-normally distributed if any linear combination of its components $\displaystyle{ \textstyle\sum_j a_j X_j }$ has a (univariate) normal distribution. The variance of X is a k×k symmetric positive-definite matrix V. The multivariate normal distribution is a special case of the elliptical distributions. As such, its iso-density loci in the k = 2 case are ellipses and in the case of arbitrary k are ellipsoids.
• Complex normal distribution deals with the complex normal vectors. A complex vector X ∈ Ck is said to be normal if both its real and imaginary components jointly possess a 2k-dimensional multivariate normal distribution. The variance-covariance structure of X is described by two matrices: the variance matrix Γ, and the relation matrix C.
• Gaussian processes are the normally distributed stochastic processes. These can be viewed as elements of some infinite-dimensional Hilbert space H, and thus are the analogues of multivariate normal vectors for the case k = ∞. A random element h ∈ H is said to be normal if for any constant a ∈ H the scalar product (a, h) has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear covariance operator K: H → H. Several Gaussian processes became popular enough to have their own names, including Brownian motion.


One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. The examples of such extensions are:

• Pearson distribution — a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
• The generalized normal distribution, also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors.

A random variable X has a two-piece normal distribution if it has a distribution

$\displaystyle{ f_X( x ) = N( \mu, \sigma_1^2 ) \text{ if } x \le \mu }$

$\displaystyle{ f_X( x ) = N( \mu, \sigma_2^2 ) \text{ if } x \ge \mu }$

where μ is the mean and σ1 and σ2 are the standard deviations of the distribution to the left and right of the mean respectively.

The mean, variance and third central moment of this distribution have been determined (John, S. (1982). "The three parameter two-piece normal family of distributions and its fitting". Communications in Statistics – Theory and Methods, 11(8): 879–885. doi:10.1080/03610928208828279).

$\displaystyle{ \operatorname{E}( X ) = \mu + \sqrt{\frac 2 \pi } ( \sigma_2 - \sigma_1 ) }$

$\displaystyle{ \operatorname{V}( X ) = \left( 1 - \frac 2 \pi\right)( \sigma_2 - \sigma_1 )^2 + \sigma_1 \sigma_2 }$

$\displaystyle{ \operatorname{T}( X ) = \sqrt{ \frac 2 \pi}( \sigma_2 - \sigma_1 ) \left[ \left( \frac 4 \pi - 1 \right) ( \sigma_2 - \sigma_1)^2 + \sigma_1 \sigma_2 \right] }$

where E(X), V(X) and T(X) are the mean, variance, and third central moment respectively.

## Statistical inference

### Estimation of parameters


It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample $\displaystyle{ (x_1, \ldots, x_n) }$ from a normal $\displaystyle{ N(\mu, \sigma^2) }$ population we would like to learn the approximate values of parameters $\displaystyle{ \mu }$ and $\displaystyle{ \sigma^2 }$. The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:

$\displaystyle{ \ln\mathcal{L}(\mu,\sigma^2) = \sum_{i=1}^n \ln f(x_i\mid\mu,\sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2. }$

Taking derivatives with respect to $\displaystyle{ \mu }$ and $\displaystyle{ \sigma^2 }$ and solving the resulting system of first order conditions yields the maximum likelihood estimates:

$\displaystyle{ \hat{\mu} = \overline{x} \equiv \frac{1}{n}\sum_{i=1}^n x_i, \qquad \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2. }$
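In code, these estimates are one line each; a sketch (the function name is ours, for illustration only):

```python
import numpy as np

def fit_normal_mle(x):
    """Maximum likelihood estimates (mu_hat, sigma2_hat) for a normal sample."""
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()                        # arithmetic mean of the observations
    sigma2_hat = np.mean((x - mu_hat) ** 2)  # note the 1/n factor, not 1/(n-1)
    return mu_hat, sigma2_hat

rng = np.random.default_rng(7)
print(fit_normal_mle(rng.normal(2.0, 3.0, size=100_000)))  # approx (2.0, 9.0)
```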

#### Sample mean

Estimator $\displaystyle{ \textstyle\hat\mu }$ is called the sample mean, since it is the arithmetic mean of all observations. The statistic $\displaystyle{ \textstyle\overline{x} }$ is complete and sufficient for $\displaystyle{ \mu }$, and therefore by the Lehmann–Scheffé theorem, $\displaystyle{ \textstyle\hat\mu }$ is the uniformly minimum variance unbiased (UMVU) estimator. In finite samples it is distributed normally:

$\displaystyle{ \hat\mu \sim \mathcal{N}(\mu,\sigma^2/n). }$

The variance of this estimator is equal to the μμ-element of the inverse Fisher information matrix $\displaystyle{ \textstyle\mathcal{I}^{-1} }$. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of $\displaystyle{ \textstyle\hat\mu }$ is proportional to $\displaystyle{ \textstyle1/\sqrt{n} }$, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.

From the standpoint of the asymptotic theory, $\displaystyle{ \textstyle\hat\mu }$ is consistent, that is, it converges in probability to $\displaystyle{ \mu }$ as $\displaystyle{ n\rightarrow\infty }$. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:

$\displaystyle{ \sqrt{n}(\hat\mu-\mu) \,\xrightarrow{d}\, \mathcal{N}(0,\sigma^2). }$

#### Sample variance


The estimator $\displaystyle{ \textstyle\hat\sigma^2 }$ is called the sample variance, since it is the variance of the sample ($\displaystyle{ (x_1, \ldots, x_n) }$). In practice, another estimator is often used instead of the $\displaystyle{ \textstyle\hat\sigma^2 }$. This other estimator is denoted $\displaystyle{ s^2 }$, and is also called the sample variance, which represents a certain ambiguity in terminology; its square root $\displaystyle{ s }$ is called the sample standard deviation. The estimator $\displaystyle{ s^2 }$ differs from $\displaystyle{ \textstyle\hat\sigma^2 }$ by having (n − 1) instead of n in the denominator (the so-called Bessel's correction):

$\displaystyle{ s^2 = \frac{n}{n-1} \hat\sigma^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2. }$



The difference between $\displaystyle{ s^2 }$ and $\displaystyle{ \textstyle\hat\sigma^2 }$ becomes negligibly small for large n. In finite samples however, the motivation behind the use of $\displaystyle{ s^2 }$ is that it is an unbiased estimator of the underlying parameter $\displaystyle{ \sigma^2 }$, whereas $\displaystyle{ \textstyle\hat\sigma^2 }$ is biased. Also, by the Lehmann–Scheffé theorem the estimator $\displaystyle{ s^2 }$ is uniformly minimum variance unbiased (UMVU), which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator $\displaystyle{ \textstyle\hat\sigma^2 }$ is "better" than the $\displaystyle{ s^2 }$ in terms of the mean squared error (MSE) criterion. In finite samples both $\displaystyle{ s^2 }$ and $\displaystyle{ \textstyle\hat\sigma^2 }$ have scaled chi-squared distribution with (n − 1) degrees of freedom:


$\displaystyle{ s^2 \sim \frac{\sigma^2}{n-1} \cdot \chi^2_{n-1}, \qquad \hat\sigma^2 \sim \frac{\sigma^2}{n} \cdot \chi^2_{n-1}. }$


The first of these expressions shows that the variance of $\displaystyle{ s^2 }$ is equal to $\displaystyle{ 2\sigma^4/(n-1) }$, which is slightly greater than the σσ-element of the inverse Fisher information matrix $\displaystyle{ \textstyle\mathcal{I}^{-1} }$. Thus, $\displaystyle{ s^2 }$ is not an efficient estimator for $\displaystyle{ \sigma^2 }$, and moreover, since $\displaystyle{ s^2 }$ is UMVU, we can conclude that the finite-sample efficient estimator for $\displaystyle{ \sigma^2 }$ does not exist.

Applying the asymptotic theory, both estimators $\displaystyle{ s^2 }$ and $\displaystyle{ \textstyle\hat\sigma^2 }$ are consistent, that is they converge in probability to $\displaystyle{ \sigma^2 }$ as the sample size $\displaystyle{ n\rightarrow\infty }$. The two estimators are also both asymptotically normal:

$\displaystyle{ \sqrt{n}(\hat\sigma^2 - \sigma^2) \simeq \sqrt{n}(s^2-\sigma^2) \,\xrightarrow{d}\, \mathcal{N}(0,2\sigma^4). }$

In particular, both estimators are asymptotically efficient for $\displaystyle{ \sigma^2 }$.
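The bias and mean-squared-error trade-off between the two estimators is easy to see in simulation; a sketch with a deliberately small n (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n, reps, sigma2 = 5, 200_000, 4.0
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

s2 = x.var(axis=1, ddof=1)          # unbiased estimator s^2 (Bessel's correction)
sigma2_hat = x.var(axis=1, ddof=0)  # biased MLE sigma_hat^2
print(s2.mean(), sigma2_hat.mean())  # ~4.0 versus ~4*(n-1)/n = 3.2
print(((s2 - sigma2) ** 2).mean(), ((sigma2_hat - sigma2) ** 2).mean())
# the biased estimator attains the smaller mean squared error
```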

A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k, and A and B are symmetric, invertible matrices of size $\displaystyle{ k\times k }$, then

$\displaystyle{ (\mathbf{y}-\mathbf{x})'\mathbf{A}(\mathbf{y}-\mathbf{x}) + (\mathbf{x}-\mathbf{z})' \mathbf{B}(\mathbf{x}-\mathbf{z}) = (\mathbf{x} - \mathbf{c})'(\mathbf{A}+\mathbf{B})(\mathbf{x} - \mathbf{c}) + (\mathbf{y} - \mathbf{z})'(\mathbf{A}^{-1} + \mathbf{B}^{-1})^{-1}(\mathbf{y} - \mathbf{z}) }$

where

$\displaystyle{ \mathbf{c} = (\mathbf{A} + \mathbf{B})^{-1}(\mathbf{A}\mathbf{y} + \mathbf{B} \mathbf{z}) }$

Note that the form x′ A x is called a quadratic form and is a scalar:

$\displaystyle{ \mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i,j}a_{ij} x_i x_j }$

In other words, it sums up all possible combinations of products of pairs of elements from x, with a separate coefficient for each. In addition, since $\displaystyle{ x_i x_j = x_j x_i }$, only the sum $\displaystyle{ a_{ij} + a_{ji} }$ matters for any off-diagonal elements of A, and there is no loss of generality in assuming that A is symmetric. Furthermore, if A is symmetric, then the form $\displaystyle{ \mathbf{x}'\mathbf{A}\mathbf{y} = \mathbf{y}'\mathbf{A}\mathbf{x}. }$

Another useful formula is as follows:

$\displaystyle{ \sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2 }$

where $\displaystyle{ \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i. }$

### Confidence intervals

By Cochran's theorem, for normal distributions the sample mean $\displaystyle{ \textstyle\hat\mu }$ and the sample variance s2 are independent, which means there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between $\displaystyle{ \textstyle\hat\mu }$ and s can be employed to construct the so-called t-statistic:

$\displaystyle{ t = \frac{\hat\mu-\mu}{s/\sqrt{n}} = \frac{\overline{x}-\mu}{\sqrt{\frac{1}{n(n-1)}\sum(x_i-\overline{x})^2}} \sim t_{n-1} }$

This quantity t has the Student's t-distribution with (n − 1) degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistic will allow us to construct the confidence interval for μ; similarly, inverting the χ2 distribution of the statistic s2 will give us the confidence interval for σ2:

$\displaystyle{ \mu \in \left[ \hat\mu - t_{n-1,1-\alpha/2} \frac{1}{\sqrt{n}}s, \hat\mu + t_{n-1,1-\alpha/2} \frac{1}{\sqrt{n}}s \right] \approx \left[ \hat\mu - |z_{\alpha/2}|\frac{1}{\sqrt n}s, \hat\mu + |z_{\alpha/2}|\frac{1}{\sqrt n}s \right], }$

$\displaystyle{ \sigma^2 \in \left[ \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}}, \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \right] \approx \left[ s^2 - |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2, s^2 + |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2 \right], }$

where $\displaystyle{ t_{k,p} }$ and $\displaystyle{ \chi^2_{k,p} }$ are the pth quantiles of the t- and χ2-distributions respectively. These confidence intervals are of the confidence level 1 − α, meaning that the true values μ and σ2 fall outside of these intervals with probability (or significance level) α. In practice people usually take α = 5%, resulting in the 95% confidence intervals. The approximate formulas in the display above were derived from the asymptotic distributions of $\displaystyle{ \textstyle\hat\mu }$ and s2. The approximate formulas become valid for large values of n, and are more convenient for the manual calculation since the standard normal quantiles zα/2 do not depend on n. In particular, the most popular value of α = 5% results in |z0.025| = 1.96.
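These intervals can be computed with standard quantile functions; a sketch using scipy (the data and level are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
x = rng.normal(10.0, 2.0, size=40)
n, alpha = len(x), 0.05

mu_hat, s = x.mean(), x.std(ddof=1)
t_q = stats.t.ppf(1 - alpha / 2, df=n - 1)
print(mu_hat - t_q * s / np.sqrt(n), mu_hat + t_q * s / np.sqrt(n))  # CI for mu

chi_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)
chi_lo = stats.chi2.ppf(alpha / 2, df=n - 1)
print((n - 1) * s**2 / chi_hi, (n - 1) * s**2 / chi_lo)              # CI for sigma^2
```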

First, the likelihood function is (using the formula above for the sum of differences from the mean):

### Normality tests

\displaystyle{ \begin{align} 1.1.1.2.2.2.2.2.2.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.4.3.3.3.3.3.3.3.3.3.3.3.4.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3 p(\mathbf{X}\mid\mu,\tau) &= \prod_{i=1}^n \sqrt{\frac{\tau}{2\pi}} \exp\left(-\frac{1}{2}\tau(x_i-\mu)^2\right) \\ P (mathbf { x } mid mu，tau) & = prod _ { i = 1} ^ n sqrt { frac { tau }{2 pi } exp left (- frac {1}{2} tau (x _ i-mu) ^ 2 right) Normality tests assess the likelihood that the given data set {''x''\lt sub\gt 1\lt /sub\gt , ..., ''x\lt sub\gt n\lt /sub\gt ''} comes from a normal distribution. Typically the [[null hypothesis]] ''H''\lt sub\gt 0\lt /sub\gt is that the observations are distributed normally with unspecified mean ''μ'' and variance ''σ''\lt sup\gt 2\lt /sup\gt , versus the alternative ''H\lt sub\gt a\lt /sub\gt '' that the distribution is arbitrary. Many tests (over 40) have been devised for this problem, the more prominent of them are outlined below: &= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left(-\frac{1}{2}\tau \sum_{i=1}^n (x_i-\mu)^2\right) \\ (& = left (frac { tau }{2 pi } right) ^ { n/2} exp left (- frac {1}{2} tau sum { i = 1} ^ n (x _ i-mu) ^ 2 right)) * '''"Visual" tests''' are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis. &= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right]. (& = left (frac { tau }{2 pi } right) ^ { n/2} exp left [-frac {1}{2} tau left (sum { i = 1} ^ n (x _ i-bar { x }) ^ 2 + n (bar { x }-mu) ^ 2 right)]. ** [[Q-Q plot]]— is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it's a plot of point of the form (Φ\lt sup\gt −1\lt /sup\gt (''p\lt sub\gt k\lt /sub\gt ''), ''x''\lt sub\gt (''k'')\lt /sub\gt ), where plotting points ''p\lt sub\gt k\lt /sub\gt '' are equal to ''p\lt sub\gt k\lt /sub\gt '' = (''k'' − ''α'')/(''n'' + 1 − 2''α'') and ''α'' is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line. \end{align} }

• P-P plot— similar to the Q-Q plot, but used much less frequently. This method consists of plotting the points (Φ(z(k)), pk), where $\displaystyle{ \textstyle z_{(k)} = (x_{(k)}-\hat\mu)/\hat\sigma }$. For normally distributed data this plot should lie on a 45° line between (0, 0) and (1, 1).
• Shapiro-Wilk test employs the fact that the line in the Q-Q plot has the slope of σ. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.

Then, we proceed as follows:

• Moment tests:

\displaystyle{ \begin{align} 1.1.1.2.2.2.2.2.2.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.4.3.3.3.3.3.3.3.3.3.3.3.4.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3.3 ** [[D'Agostino's K-squared test]] p(\mu\mid\mathbf{X}) &\propto p(\mathbf{X}\mid\mu) p(\mu) \\ P (mu mid mathbf { x }) & propto p (mathbf { x } mid mu) p (mu) ** [[Jarque–Bera test]] & = \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \sqrt{\frac{\tau_0}{2\pi}} \exp\left(-\frac{1}{2}\tau_0(\mu-\mu_0)^2\right) \\ (& = left (frac {2 pi } right) ^ { n/2} exp left [-frac {1}{2} tau left (sum { i = 1} ^ n (x i-bar { x }) ^ 2 + n (bar { x }-mu) ^ 2 right)] rt { frac { tau {0}{2 pi } exp left (- frac {1}{2} tau _ 0(mu-mu _ 0) ^ 2 right) * '''Empirical distribution function tests''': &\propto \exp\left(-\frac{1}{2}\left(\tau\left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right) + \tau_0(\mu-\mu_0)^2\right)\right) \\ (- frac {1}{2} left (tau left (sum _ { i = 1} ^ n (x _ i-bar { x }) ^ 2 + n (bar { x }-mu) ^ 2 right) + tau _ 0(mu-mu _ 0) ^ 2 right)) ** [[Lilliefors test]] (an adaptation of the [[Kolmogorov–Smirnov test]]) &\propto \exp\left(-\frac{1}{2} \left(n\tau(\bar{x}-\mu)^2 + \tau_0(\mu-\mu_0)^2 \right)\right) \\ (- frac {1}{2} left (n tau (bar { x }-mu) ^ 2 + tau _ 0(mu-mu _ 0) ^ 2 right)) ** [[Anderson–Darling test]] &= \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2 + \frac{n\tau\tau_0}{n\tau+\tau_0}(\bar{x} - \mu_0)^2\right) \\ & = exp left (- frac {1}{2}(n tau + tau _ 0) left (mu-dfrac { n tau bar { x } + tau _ 0}{ n tau + tau _ 0}右) ^ 2 + frac { n tau _ 0}{ n tau + tau _ 0}(bar { x }-mu _ 0) ^ 2右) &\propto \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2\right) 左(- frac {1}{2}(n tau + tau _ 0)左(mu-dfrac { n tau bar { x } + tau _ 0 mu _ 0}{ n tau + tau _ 0}右) ^ 2右) === Bayesian analysis of the normal distribution === \end{align} }

### Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

• Either the mean, or the variance, or neither, may be considered a fixed quantity.


• When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified.
• Both univariate and multivariate cases need to be considered.



The formulas for the non-linear-regression cases are summarized in the conjugate prior article.

#### Sum of two quadratics

##### Scalar form

The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious.


$\displaystyle{ a(x-y)^2 + b(x-z)^2 = (a + b)\left(x - \frac{ay+bz}{a+b}\right)^2 + \frac{ab}{a+b}(y-z)^2 }$


This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and completing the square. Note the following about the complex constant factors attached to some of the terms:

1. The factor $\displaystyle{ \frac{ay+bz}{a+b} }$ has the form of a weighted average of y and z.

2. The factor $\displaystyle{ \frac{ab}{a+b} = \frac{1}{\frac{1}{a}+\frac{1}{b}} = (a^{-1} + b^{-1})^{-1} }$ shows that this factor can be thought of as resulting from a situation where the reciprocals of quantities a and b add directly, so to combine a and b themselves, it's necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the harmonic mean, so it is not surprising that $\displaystyle{ \frac{ab}{a+b} }$ is one-half the harmonic mean of a and b.
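A quick numeric sanity check of this identity, in Python with arbitrary made-up values:

```python
# Check: a(x-y)^2 + b(x-z)^2 == (a+b)(x - (ay+bz)/(a+b))^2 + ab/(a+b)*(y-z)^2
a, b = 2.0, 5.0           # arbitrary positive coefficients
x, y, z = 1.3, -0.7, 4.2  # arbitrary points

lhs = a * (x - y) ** 2 + b * (x - z) ** 2
c = (a * y + b * z) / (a + b)  # the weighted-average factor from note 1
rhs = (a + b) * (x - c) ** 2 + (a * b) / (a + b) * (y - z) ** 2
assert abs(lhs - rhs) < 1e-12  # both sides agree up to rounding error
```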


##### Vector form


A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k, and A and B are symmetric, invertible matrices of size $\displaystyle{ k\times k }$, then


$\displaystyle{ \begin{align} & (\mathbf{y}-\mathbf{x})'\mathbf{A}(\mathbf{y}-\mathbf{x}) + (\mathbf{x}-\mathbf{z})'\mathbf{B}(\mathbf{x}-\mathbf{z}) \\ ={} & (\mathbf{x} - \mathbf{c})'(\mathbf{A}+\mathbf{B})(\mathbf{x} - \mathbf{c}) + (\mathbf{y} - \mathbf{z})'(\mathbf{A}^{-1} + \mathbf{B}^{-1})^{-1}(\mathbf{y} - \mathbf{z}) \end{align} }$


where

$\displaystyle{ \mathbf{c} = (\mathbf{A} + \mathbf{B})^{-1}(\mathbf{A}\mathbf{y} + \mathbf{B} \mathbf{z}) }$

Note that the form $\displaystyle{ \mathbf{x}'\mathbf{A}\mathbf{x} }$ is called a quadratic form and is a scalar:

$\displaystyle{ \mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i,j}a_{ij} x_i x_j }$


In other words, it sums up all possible combinations of products of pairs of elements from x, with a separate coefficient for each. In addition, since $\displaystyle{ x_i x_j = x_j x_i }$, only the sum $\displaystyle{ a_{ij} + a_{ji} }$ matters for any off-diagonal elements of A, and there is no loss of generality in assuming that A is symmetric. Furthermore, if A is symmetric, then the form $\displaystyle{ \mathbf{x}'\mathbf{A}\mathbf{y} = \mathbf{y}'\mathbf{A}\mathbf{x}. }$


#### Sum of differences from the mean

Another useful formula is as follows:


$\displaystyle{ \sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2 }$

where $\displaystyle{ \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i. }$


#### With known variance


For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $\displaystyle{ x \sim \mathcal{N}(\mu, \sigma^2) }$ with known variance σ2, the conjugate prior distribution is also normally distributed.


This can be shown more easily by rewriting the variance as the precision, i.e. using τ = 1/σ2. Then if $\displaystyle{ x \sim \mathcal{N}(\mu, 1/\tau) }$ and $\displaystyle{ \mu \sim \mathcal{N}(\mu_0, 1/\tau_0), }$ we proceed as follows.


First, the likelihood function is (using the formula above for the sum of differences from the mean):

$\displaystyle{ \begin{align} p(\mathbf{X}\mid\mu,\tau) &= \prod_{i=1}^n \sqrt{\frac{\tau}{2\pi}} \exp\left(-\frac{1}{2}\tau(x_i-\mu)^2\right) \\ &= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left(-\frac{1}{2}\tau \sum_{i=1}^n (x_i-\mu)^2\right) \\ &= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right]. \end{align} }$

Then, we proceed as follows:

$\displaystyle{ \begin{align} p(\mu\mid\mathbf{X}) &\propto p(\mathbf{X}\mid\mu) p(\mu) \\ &= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \sqrt{\frac{\tau_0}{2\pi}} \exp\left(-\frac{1}{2}\tau_0(\mu-\mu_0)^2\right) \\ &\propto \exp\left(-\frac{1}{2}\left(\tau\left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right) + \tau_0(\mu-\mu_0)^2\right)\right) \\ &\propto \exp\left(-\frac{1}{2} \left(n\tau(\bar{x}-\mu)^2 + \tau_0(\mu-\mu_0)^2 \right)\right) \\ &= \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2 - \frac{1}{2}\,\frac{n\tau\tau_0}{n\tau+\tau_0}(\bar{x} - \mu_0)^2\right) \\ &\propto \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2\right) \end{align} }$

In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving μ. The result is the kernel of a normal distribution, with mean $\displaystyle{ \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0} }$ and precision $\displaystyle{ n\tau + \tau_0 }$, i.e.


$\displaystyle{ p(\mu\mid\mathbf{X}) \sim \mathcal{N}\left(\frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}, \frac{1}{n\tau + \tau_0}\right) }$




This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:





$\displaystyle{ \begin{align} \tau_0' &= \tau_0 + n\tau \\ \mu_0' &= \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0} \\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{align} }$


That is, to combine n data points with total precision of nτ (or equivalently, total variance of σ2/n) and mean of values $\displaystyle{ \bar{x} }$, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the more ugly formulas

$\displaystyle{ \begin{align} {\sigma^2_0}' &= \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\ \mu_0' &= \frac{\frac{n\bar{x}}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{align} }$
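To make the update concrete, here is a minimal Python sketch of this conjugate update in precision form; the data, prior values (mu0, tau0), and seed are hypothetical, not from the text.

```python
import numpy as np

def posterior_mean_known_variance(x, tau, mu0, tau0):
    """Posterior N(mu0', 1/tau0') for mu, given data x with known precision tau
    and a conjugate prior mu ~ N(mu0, 1/tau0)."""
    n = len(x)
    xbar = np.mean(x)
    tau0_new = tau0 + n * tau                           # precisions simply add
    mu0_new = (n * tau * xbar + tau0 * mu0) / tau0_new  # precision-weighted average
    return mu0_new, tau0_new

rng = np.random.default_rng(1)
sigma = 2.0                             # known standard deviation
data = rng.normal(3.0, sigma, size=50)  # hypothetical observations
mu_post, tau_post = posterior_mean_known_variance(data, 1.0 / sigma**2, mu0=0.0, tau0=0.25)
print(mu_post, 1.0 / tau_post)          # posterior mean and posterior variance of mu
```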


#### With known mean

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $\displaystyle{ x \sim \mathcal{N}(\mu, \sigma^2) }$ with known mean μ, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ2 is as follows:


$\displaystyle{ p(\sigma^2\mid\nu_0,\sigma_0^2) = \frac{(\sigma_0^2\frac{\nu_0}{2})^{\nu_0/2}}{\Gamma\left(\frac{\nu_0}{2} \right)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \propto \frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} }$


The likelihood function from above, written in terms of the variance, is:


$\displaystyle{ \begin{align} p(\mathbf{X}\mid\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i-\mu)^2\right] \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{S}{2\sigma^2}\right] \end{align} }$


where

$\displaystyle{ S = \sum_{i=1}^n (x_i-\mu)^2. }$


Then:


$\displaystyle{ \begin{align} p(\sigma^2\mid\mathbf{X}) &\propto p(\mathbf{X}\mid\sigma^2) p(\sigma^2) \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{S}{2\sigma^2}\right] \frac{(\sigma_0^2\frac{\nu_0}{2})^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2} \right)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \\ &\propto \left(\frac{1}{\sigma^2}\right)^{n/2} \frac{1}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \exp\left[-\frac{S}{2\sigma^2} + \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right] \\ &= \frac{1}{(\sigma^2)^{1+\frac{\nu_0+n}{2}}} \exp\left[-\frac{\nu_0 \sigma_0^2 + S}{2\sigma^2}\right] \end{align} }$

The above is also a scaled inverse chi-squared distribution where

$\displaystyle{ \begin{align} \nu_0' &= \nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2 \end{align} }$

or equivalently

$\displaystyle{ \begin{align} \nu_0' &= \nu_0 + n \\ {\sigma_0^2}' &= \frac{\nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2}{\nu_0+n} \end{align} }$


Reparameterizing in terms of an inverse gamma distribution, the result is:


$\displaystyle{ \begin{align} \alpha' &= \alpha + \frac{n}{2} \\ \beta' &= \beta + \frac{\sum_{i=1}^n (x_i-\mu)^2}{2} \end{align} }$
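As an illustration, a minimal Python sketch of this update with known mean, in both the scaled-inverse-chi-squared and inverse-gamma parameterizations; the prior values and data below are hypothetical.

```python
import numpy as np

def update_variance_known_mean(x, mu, nu0, sigma0_sq):
    """Scaled-inverse-chi-squared update: returns (nu0', sigma0^2')."""
    n = len(x)
    ss = float(np.sum((x - mu) ** 2))  # sum of squared deviations from known mu
    return nu0 + n, (nu0 * sigma0_sq + ss) / (nu0 + n)

def update_variance_inverse_gamma(x, mu, alpha, beta):
    """Equivalent inverse-gamma update: alpha' = alpha + n/2, beta' = beta + ss/2."""
    n = len(x)
    ss = float(np.sum((x - mu) ** 2))
    return alpha + n / 2.0, beta + ss / 2.0

rng = np.random.default_rng(2)
data = rng.normal(1.0, 3.0, size=100)  # hypothetical data with known mean 1.0
print(update_variance_known_mean(data, 1.0, nu0=5.0, sigma0_sq=4.0))
# Same prior expressed as IG(nu0/2, nu0*sigma0_sq/2) = IG(2.5, 10.0):
print(update_variance_inverse_gamma(data, 1.0, alpha=2.5, beta=10.0))
```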

#### With unknown mean and unknown variance

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $\displaystyle{ x \sim \mathcal{N}(\mu, \sigma^2) }$ with unknown mean μ and unknown variance σ2, a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution.


Logically, this originates as follows:


1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve sufficient statistics computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.


2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and sum of squared deviations.


3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.


4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: Shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: As the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.


5. This suggests that we create a conditional prior of the mean on the unknown variance, with a hyperparameter specifying the mean of the pseudo-observations associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Note that each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately.
6. This leads immediately to the normal-inverse-gamma distribution, which is the product of the two distributions just defined, with conjugate priors used (an inverse gamma distribution over the variance, and a normal distribution over the mean, conditional on the variance) and with the same four parameters just defined.


The priors are normally defined as follows:

$\displaystyle{ \begin{align} p(\mu\mid\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) \\ p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) \end{align} }$



The update equations can be derived, and look as follows:


$\displaystyle{ \begin{align} \bar{x} &= \frac 1 n \sum_{i=1}^n x_i \\ \mu_0' &= \frac{n_0\mu_0 + n\bar{x}}{n_0 + n} \\ n_0' &= n_0 + n \\ \nu_0' &= \nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\bar{x})^2 + \frac{n_0 n}{n_0 + n}(\mu_0 - \bar{x})^2 \end{align} }$

The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for $\displaystyle{ \nu_0'{\sigma_0^2}' }$ is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.
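A minimal Python sketch of these four update equations (the hyperparameter values below are hypothetical):

```python
import numpy as np

def update_normal_inverse_gamma(x, mu0, n0, nu0, sigma0_sq):
    """Posterior hyperparameters (mu0', n0', nu0', sigma0^2') for the
    normal-inverse-gamma conjugate prior described above."""
    n = len(x)
    xbar = float(np.mean(x))
    s = float(np.sum((x - xbar) ** 2))          # sum of squared deviations
    mu0_new = (n0 * mu0 + n * xbar) / (n0 + n)  # weighted by pseudo/actual counts
    n0_new = n0 + n
    nu0_new = nu0 + n
    # The last term is the "interaction term" for the prior-vs-data mean deviation.
    sigma_sq_new = (nu0 * sigma0_sq + s + n0 * n / (n0 + n) * (mu0 - xbar) ** 2) / nu0_new
    return mu0_new, n0_new, nu0_new, sigma_sq_new

rng = np.random.default_rng(3)
data = rng.normal(-2.0, 1.5, size=80)           # hypothetical observations
print(update_normal_inverse_gamma(data, mu0=0.0, n0=1.0, nu0=1.0, sigma0_sq=1.0))
```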


The prior distributions are


\displaystyle{ \begin{align} p(\mu\mid\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) = \frac{1}{\sqrt{2\pi\frac{\sigma^2}{n_0}}} \exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ &\propto (\sigma^2)^{-1/2} \exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) \\ &= \frac{(\sigma_0^2\nu_0/2)^{\nu_0/2}}{\Gamma(\nu_0/2)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\nu_0/2}} \\ &\propto {(\sigma^2)^{-(1+\nu_0/2)}} \exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]. \end{align} }


Therefore, the joint prior is

$\displaystyle{ \begin{align} p(\mu,\sigma^2; \mu_0, n_0, \nu_0,\sigma_0^2) &= p(\mu\mid\sigma^2; \mu_0, n_0)\,p(\sigma^2; \nu_0,\sigma_0^2) \\ &\propto (\sigma^2)^{-(\nu_0+3)/2} \exp\left[-\frac 1 {2\sigma^2}\left(\nu_0\sigma_0^2 + n_0(\mu-\mu_0)^2\right)\right]. \end{align} }$

The likelihood function from the section above with known variance is:


$\displaystyle{ \begin{align} p(\mathbf{X}\mid\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i -\mu)^2\right)\right] \end{align} }$

Writing it in terms of variance rather than precision, we get:

$\displaystyle{ \begin{align} p(\mathbf{X}\mid\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \\ &\propto {\sigma^2}^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x} -\mu)^2\right)\right] \end{align} }$



where $\displaystyle{ S = \sum_{i=1}^n(x_i-\bar{x})^2. }$


Therefore, the posterior is (dropping the hyperparameters as conditioning factors):


$\displaystyle{ \begin{align} p(\mu,\sigma^2\mid\mathbf{X}) & \propto p(\mu,\sigma^2) \, p(\mathbf{X}\mid\mu,\sigma^2) \\ & \propto (\sigma^2)^{-(\nu_0+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + n_0(\mu-\mu_0)^2\right)\right] {\sigma^2}^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x} -\mu)^2\right)\right] \\ &= (\sigma^2)^{-(\nu_0+n+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + n_0(\mu-\mu_0)^2 + n(\bar{x} -\mu)^2\right)\right] \\ &= (\sigma^2)^{-(\nu_0+n+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2 + (n_0+n)\left(\mu-\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}\right)^2\right)\right] \\ & \propto (\sigma^2)^{-1/2} \exp\left[-\frac{n_0+n}{2\sigma^2}\left(\mu-\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}\right)^2\right] \\ & \quad\times (\sigma^2)^{-(\nu_0/2+n/2+1)} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right] \\ & = \mathcal{N}_{\mu\mid\sigma^2}\left(\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}, \frac{\sigma^2}{n_0+n}\right) \cdot {\rm IG}_{\sigma^2}\left(\frac12(\nu_0+n), \frac12\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right). \end{align} }$

In other words, the posterior distribution has the form of a product of a normal distribution over p(μ | σ2) times an inverse gamma distribution over p(σ2), with parameters that are the same as the update equations above.


## Occurrence and applications


The occurrence of normal distribution in practical problems can be loosely classified into four categories:

1. Exactly normal distributions;


2. Approximately normal laws, for example when such approximation is justified by the central limit theorem; and
3. Distributions modeled as normal – the normal distribution being the distribution with maximum entropy for a given mean and variance.

4. Regression problems – the normal distribution being found after systematic effects have been modeled sufficiently well.

### Exact normality

The ground state of a quantum harmonic oscillator has the Gaussian distribution.

Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell. Examples of such quantities are:

• The velocity-like quantities below, and, for example, the probability density of the ground state of a quantum harmonic oscillator noted above.

• The position of a particle that experiences diffusion. If initially the particle is located at a specific point (that is, its probability distribution is the Dirac delta function), then after time t its location is described by a normal distribution with variance t, which satisfies the diffusion equation $\displaystyle{ \frac{\partial}{\partial t} f(x,t) = \frac{1}{2} \frac{\partial^2}{\partial x^2} f(x,t) }$. If the initial location is given by a certain density function $\displaystyle{ g(x) }$, then the density at time t is the convolution of g and the normal PDF.


### Approximate normality

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.


• In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where infinitely divisible and decomposable distributions are involved, such as
• Poisson random variables, associated with rare events;
• Thermal radiation has a Bose–Einstein distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.


### Assumed normality


Histogram of sepal widths for Iris versicolor from Fisher's Iris flower data set, with superimposed best-fitting normal distribution.



There are statistical methods to empirically test that assumption, see the above Normality tests section.


• In biology, the logarithms of various variables tend to have a normal distribution, that is, the variables tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including:
• Measures of size of living tissue (length, height, skin area, weight);


• The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
• Certain physiological measurements, such as blood pressure of adult humans.

• Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.


• In standardized testing, results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the IQ test) or transforming the raw test scores into "output" scores by fitting them to the normal distribution. For example, the SAT's traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.

Fitted cumulative normal distribution to October rainfalls, see distribution fitting

### Produced normality

In regression analysis, lack of normality in residuals simply indicates that the model postulated is inadequate in accounting for the tendency in the data and needs to be augmented; in other words, normality in residuals can always be achieved given a properly constructed model.

## Computational methods

### Generating values from normal distribution

The bean machine, a device invented by Francis Galton, can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.

In computer simulations, especially in applications of the Monte-Carlo method, it is often desirable to generate values that are normally distributed. The algorithms listed below all generate the standard normal deviates, since a N(μ, σ2) can be generated as X = μ + σZ, where Z is standard normal. All these algorithms rely on the availability of a random number generator U capable of producing uniform random variates.

• The most straightforward method is based on the probability integral transform property: if U is distributed uniformly on (0,1), then Φ−1(U) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the probit function Φ−1, which cannot be done analytically. Some approximate methods are described in Hart (1968) and in the erf article. Wichura gives a fast algorithm for computing this function to 16 decimal places (Wichura 1988, Algorithm AS241), which is used by R to compute random variates of the normal distribution.

• An easy-to-program approximate approach, which relies on the central limit theorem, is as follows: generate 12 uniform U(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be Irwin–Hall, which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6) (Johnson, Kotz & Balakrishnan 1995, Equation (26.48)).

• The Box–Muller method uses two independent random numbers U and V distributed uniformly on (0,1). Then the two random variables X and Y

$\displaystyle{ X = \sqrt{- 2 \ln U} \, \cos(2 \pi V) , \qquad Y = \sqrt{- 2 \ln U} \, \sin(2 \pi V) }$



will both have the standard normal distribution, and will be independent. This formulation arises because for a bivariate normal random vector (X, Y) the squared norm X2 + Y2 will have the chi-squared distribution with two degrees of freedom, which is an easily generated exponential random variable corresponding to the quantity −2ln(U) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable V.
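A minimal Python sketch of the Box–Muller transform as just described (the sample size and seed are arbitrary):

```python
import numpy as np

def box_muller(n, seed=None):
    """Return two independent arrays of n standard normal deviates."""
    rng = np.random.default_rng(seed)
    u = 1.0 - rng.random(n)        # uniform on (0, 1]; avoids log(0)
    v = rng.random(n)              # uniform on [0, 1)
    r = np.sqrt(-2.0 * np.log(u))  # radius from the exponential -2 ln(U)
    return r * np.cos(2.0 * np.pi * v), r * np.sin(2.0 * np.pi * v)

x, y = box_muller(100_000, seed=0)
print(x.mean(), x.std())           # approximately 0 and 1
```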


• The Marsaglia polar method is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, U and V are drawn from the uniform (−1,1) distribution, and then S = U2 + V2 is computed. If S is greater than or equal to 1, then the method starts over; otherwise the two quantities
$\displaystyle{ X = U\sqrt{\frac{-2\ln S}{S}}, \qquad Y = V\sqrt{\frac{-2\ln S}{S}} }$


are returned. Again, X and Y are independent, standard normal random variables.
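A minimal Python sketch of the polar method (the additional rejection of S = 0, which would break the logarithm, is an assumption not spelled out in the text):

```python
import math
import random

def marsaglia_polar():
    """Return a pair of independent standard normal deviates without
    evaluating sine or cosine."""
    while True:
        u = random.uniform(-1.0, 1.0)
        v = random.uniform(-1.0, 1.0)
        s = u * u + v * v
        if 0.0 < s < 1.0:                  # start over if S >= 1 (or S == 0)
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor
```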
• The Ratio method is a rejection method. The algorithm proceeds as follows:
• Generate two independent uniform deviates U and V;
• Compute $\displaystyle{ X = \sqrt{8/e}\,(V - 0.5)/U }$;
• Optional: if $\displaystyle{ X^2 \le 5 - 4e^{1/4}U }$ then accept X and terminate the algorithm;


• Optional: if $\displaystyle{ X^2 \ge 4e^{-1.35}/U + 1.4 }$ then reject X and start over from step 1;
• If $\displaystyle{ X^2 \le -4 \ln U }$ then accept X; otherwise start the algorithm over.


The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved so that the logarithm is rarely evaluated.
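A Python sketch of the full procedure, assuming the standard Kinderman–Monahan form of the ratio method (the constant √(8/e) and the acceptance bounds are taken from the steps above; the guard against U = 0 is an added assumption):

```python
import math
import random

SCALE = math.sqrt(8.0 / math.e)  # sqrt(8/e), width of the acceptance region

def ratio_of_uniforms():
    """Return a standard normal deviate via the ratio (rejection) method."""
    while True:
        u = random.random()
        if u == 0.0:
            continue                                  # avoid division by zero
        v = random.random()
        x = SCALE * (v - 0.5) / u
        x2 = x * x
        if x2 <= 5.0 - 4.0 * math.exp(0.25) * u:
            return x                                  # quick accept (optional step)
        if x2 >= 4.0 * math.exp(-1.35) / u + 1.4:
            continue                                  # quick reject (optional step)
        if x2 <= -4.0 * math.log(u):
            return x                                  # exact acceptance test
```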
• The ziggurat algorithm is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.



• Integer arithmetic can be used to sample from the standard normal distribution. This method is exact in the sense that it satisfies the conditions of ideal approximation; i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number.
• There is also some investigation into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
### Numerical approximations for the normal CDF



The standard normal CDF is widely used in scientific and statistical computing.


The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.






• Zelen & Severo (1964) give the approximation for Φ(x) for x > 0 with the absolute error |ε(x)| < 7.5·10−8 (algorithm 26.2.17):

$\displaystyle{ \Phi(x) = 1 - \varphi(x)\left(b_1t + b_2t^2 + b_3t^3 + b_4t^4 + b_5t^5\right) + \varepsilon(x), \qquad t = \frac{1}{1+b_0x}, }$


where ϕ(x) is the standard normal PDF, and b0 = 0.2316419, b1 = 0.319381530, b2 = −0.356563782, b3 = 1.781477937, b4 = −1.821255978, b5 = 1.330274429.
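A Python sketch of this approximation (extended by the symmetry Φ(−x) = 1 − Φ(x) to cover x < 0, which is implied rather than stated above):

```python
import math

# Coefficients of algorithm 26.2.17.
B0 = 0.2316419
B = (0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)

def phi_approx(x):
    """Standard normal CDF via algorithm 26.2.17; absolute error < 7.5e-8."""
    if x < 0.0:
        return 1.0 - phi_approx(-x)           # symmetry of the normal CDF
    t = 1.0 / (1.0 + B0 * x)
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    poly = sum(b * t ** (k + 1) for k, b in enumerate(B))
    return 1.0 - pdf * poly
```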

• Hart (1968) lists some dozens of approximations – by means of rational functions, with or without exponentials – for the erfc() function. His algorithms vary in the degree of complexity and the resulting precision, with maximum absolute precision of 24 digits. An algorithm by West (2009) combines Hart's algorithm 5666 with a continued fraction approximation in the tail to provide a fast computation algorithm with a 16-digit precision.



• Cody (1969), after recalling that the Hart (1968) solution is not suited for erf, gives a solution for both erf and erfc, with maximal relative error bound, via Rational Chebyshev Approximation.


• Marsaglia (2004) suggested a simple algorithm based on the Taylor series expansion

$\displaystyle{ \Phi(x) = \frac12 + \varphi(x)\left( x + \frac{x^3} 3 + \frac{x^5}{3\cdot5} + \frac{x^7}{3\cdot5\cdot7} + \frac{x^9}{3\cdot5\cdot7\cdot9} + \cdots \right) }$




The term "standard normal", which denotes the normal distribution with zero mean and unit variance came into general use around the 1950s, appearing in the popular textbooks by P.G. Hoel (1947) "Introduction to mathematical statistics" and A.M. Mood (1950) "Introduction to the theory of statistics".


for calculating Φ(x) with arbitrary precision. The drawback of this algorithm is comparatively slow calculation time (for example it takes over 300 iterations to calculate the function with 16 digits of precision when x = 10).
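A Python sketch of this series evaluation (the stopping tolerance is an assumption; as noted above, convergence is slow for large |x|, and the terms grow very large before shrinking):

```python
import math

def phi_taylor(x, tol=1e-17):
    """Standard normal CDF from the Taylor series
    Phi(x) = 1/2 + phi(x) * (x + x^3/3 + x^5/(3*5) + ...).
    Intended for moderate |x|; iteration count grows quickly with x."""
    term = x
    total = x
    n = 0
    while abs(term) > tol:
        n += 1
        term *= x * x / (2.0 * n + 1.0)   # next term of the series
        total += term
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return 0.5 + pdf * total
```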

• The GNU Scientific Library calculates values of the standard normal CDF using Hart's algorithms and approximations with Chebyshev polynomials.


Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting p=Φ(z), the simplest approximation for the quantile function is:

$\displaystyle{ z=\Phi^{-1}(p)=5.5556\left[1- \left( \frac{1-p} p \right)^{0.1186}\right],\qquad p\ge 1/2 }$

This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≤ p ≤ 0.9999, corresponding to 0 ≤ z ≤ 3.719). For p < 1/2, replace p by 1 − p and change sign.
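A Python sketch of this quantile approximation, including the sign-change rule for p < 1/2 just described (the function name shore_quantile is illustrative):

```python
def shore_quantile(p):
    """Shore's (1982) approximation to z = Phi^{-1}(p);
    maximum absolute error about 0.026 for 0.5 <= p <= 0.9999."""
    if p >= 0.5:
        return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)
    # For p < 1/2, replace p by 1 - p and change sign.
    return -5.5556 * (1.0 - (p / (1.0 - p)) ** 0.1186)
```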

Another approximation, somewhat less accurate, is the single-parameter approximation:

$\displaystyle{ z=-0.4115\left\{ \frac{1-p} p + \log \left[ \frac{1-p} p \right] - 1 \right\}, \qquad p\ge 1/2 }$

The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by

$\displaystyle{ \begin{align} L(z) & =\int_z^\infty (u-z)\varphi(u) \, du=\int_z^\infty [1-\Phi (u)] \, du \\[5pt] L(z) & \approx \begin{cases} 0.4115\left(\dfrac p {1-p} \right) - z, & p\lt 1/2, \\ 0.4115\left( \dfrac {1-p} p \right), & p\ge 1/2. \end{cases} \\[5pt] \text{or, equivalently,} \\ L(z) & \approx \begin{cases} 0.4115\left\{ 1-\log \left[ \frac p {1-p} \right] \right\}, & p\lt 1/2, \\ 0.4115 \dfrac{1-p} p, & p\ge 1/2. \end{cases} \end{align} }$

This approximation is particularly accurate for the right far-tail (maximum error of 10−3 for z≥1.4). Highly accurate approximations for the CDF, based on Response Modeling Methodology (RMM, Shore, 2011, 2012), are shown in Shore (2005).
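A Python sketch of the first of the two loss-integral approximations; computing p = Φ(z) exactly with the standard library's error function is an implementation choice here, not part of Shore's approximation:

```python
import math

def normal_loss(z):
    """Approximate L(z) = integral from z to infinity of (u - z) * phi(u) du."""
    p = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # p = Phi(z), computed exactly
    if p < 0.5:
        return 0.4115 * (p / (1.0 - p)) - z
    return 0.4115 * ((1.0 - p) / p)
```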

Some more approximations can be found at: Error function#Approximation with elementary functions. In particular, small relative error on the whole domain for the CDF $\displaystyle{ \Phi }$ and the quantile function $\displaystyle{ \Phi^{-1} }$ as well, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.

## History

### Development

Some authors attribute the credit for the discovery of the normal distribution to de Moivre, who in 1738 published in the second edition of his "The Doctrine of Chances" the study of the coefficients in the binomial expansion of $\displaystyle{ (a + b)^n }$. De Moivre proved that the middle term in this expansion has the approximate magnitude of $\displaystyle{ 2/\sqrt{2\pi n} }$, and that "If m or ½n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is $\displaystyle{ -\frac{2\ell\ell}{n} }$." Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.

Carl Friedrich Gauss discovered the normal distribution in 1809 as a way to rationalize the method of least squares.

In 1809 Gauss published his monograph "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, M′, M′′, ... to denote the measurements of some unknown quantity V, and sought the "most probable" estimator of that quantity: the one that maximizes the probability of obtaining the observed experimental results. In his notation φΔ is the probability law of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values. Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:

$\displaystyle{ \varphi\mathit{\Delta} = \frac h {\surd\pi} \, e^{-\mathrm{hh}\Delta\Delta}, }$
where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares (NWLS) method.


Pierre-Simon Laplace proved the central limit theorem in 1810, consolidating the importance of the normal distribution in statistics.

Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions. It was Laplace who first posed the problem of aggregating several observations in 1774, although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the integral $\displaystyle{ \int e^{-t^2} \, dt = \sqrt{\pi} }$ in 1782, providing the normalization constant for the normal distribution. Finally, it was Laplace who in 1810 proved and presented to the Academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.

It is of interest to note that in 1809 the Irish-American mathematician Robert Adrain published two derivations of the normal probability law, simultaneously with and independently of Gauss. His works remained largely unnoticed by the scientific community, until in 1871 they were "rediscovered" by Abbe.

In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena: "The number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is

$\displaystyle{ \operatorname{N} \frac{1}{\alpha\;\sqrt\pi}\; e^{-\frac{x^2}{\alpha^2}} \, dx }$

### Naming

Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace's second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual". However, by the end of the 19th century some authors had started using the name normal distribution, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus "normal". Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances." Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.


Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ, as in modern notation. Soon after this, in 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:

$\displaystyle{ df = \frac{1}{\sqrt{2\sigma^2\pi}}e^{-(x-m)^2/(2\sigma^2)} \, dx }$

The term "standard normal", which denotes the normal distribution with zero mean and unit variance came into general use around the 1950s, appearing in the popular textbooks by P.G. Hoel (1947) "Introduction to mathematical statistics" and A.M. Mood (1950) "Introduction to the theory of statistics".