Normal distribution



{{Infobox probability distribution
 | name       = Normal distribution
 | type       = density
 | pdf_image  = Normal Distribution PDF.svg
 | pdf_caption = The red curve is the standard normal distribution
 | cdf_image  = Normal Distribution CDF.svg
 | cdf_caption = 
 | notation   = [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math]
 | parameters = [math]\displaystyle{ \mu\in\R }[/math] = mean (location)
[math]\displaystyle{ \sigma^2\gt 0 }[/math] = variance (squared scale)
 | support    = [math]\displaystyle{ x\in\R }[/math]
 | pdf        = [math]\displaystyle{ \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2} }[/math]
 | cdf        = [math]\displaystyle{ \frac{1}{2}\left[1 + \operatorname{erf}\left( \frac{x-\mu}{\sigma\sqrt{2}}\right)\right] }[/math]
 | quantile   = [math]\displaystyle{ \mu+\sigma\sqrt{2} \operatorname{erf}^{-1}(2p-1) }[/math]
 | mean       = [math]\displaystyle{ \mu }[/math]
 | median     = [math]\displaystyle{ \mu }[/math]
 | mode       = [math]\displaystyle{ \mu }[/math]
 | variance   = [math]\displaystyle{ \sigma^2 }[/math]
 | mad        = [math]\displaystyle{ \sigma\sqrt{2/\pi} }[/math]
 | skewness   = [math]\displaystyle{ 0 }[/math]
 | kurtosis   = [math]\displaystyle{ 0 }[/math]
 | entropy    = [math]\displaystyle{ \frac{1}{2} \log(2\pi e\sigma^2) }[/math]
 | mgf        = [math]\displaystyle{ \exp(\mu t + \sigma^2t^2/2) }[/math]
 | char       = [math]\displaystyle{ \exp(i\mu t - \sigma^2 t^2/2) }[/math]
 | fisher     = [math]\displaystyle{ \mathcal{I}(\mu,\sigma) =\begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix} }[/math]
[math]\displaystyle{ \mathcal{I}(\mu,\sigma^2) =\begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 1/(2\sigma^4) \end{pmatrix} }[/math]
 | KLDiv      = [math]\displaystyle{ { 1 \over 2 } \left\{ \left( \frac{\sigma_0}{\sigma_1} \right)^2 + \frac{(\mu_1 - \mu_0)^2}{\sigma_1^2} - 1 + 2 \ln {\sigma_1 \over \sigma_0} \right\} }[/math]
}}


In probability theory, a normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is


[math]\displaystyle{ f(x) = \frac{1}{\sigma \sqrt{2\pi} } e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} }[/math]

The parameter [math]\displaystyle{ \mu }[/math] is the mean or expectation of the distribution (and also its median and mode), while the parameter [math]\displaystyle{ \sigma }[/math] is its standard deviation.[1] The variance of the distribution is [math]\displaystyle{ \sigma^2 }[/math].[2] A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.

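As an illustrative sketch (assuming Python 3.8+, whose statistics.NormalDist is used here for comparison; the values of [math]\displaystyle{ \mu }[/math], [math]\displaystyle{ \sigma }[/math] and [math]\displaystyle{ x }[/math] are arbitrary), the density formula above can be checked directly:

<syntaxhighlight lang="python">
import math
from statistics import NormalDist

def normal_pdf(x, mu, sigma):
    """General normal density: exp(-((x-mu)/sigma)**2 / 2) / (sigma*sqrt(2*pi))."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 1.5, 2.0  # arbitrary parameters for this check
for x in (-1.0, 0.0, 1.5, 4.0):
    assert math.isclose(normal_pdf(x, mu, sigma), NormalDist(mu, sigma).pdf(x))
</syntaxhighlight>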


Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.[3][4] Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.[5]



Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.



A normal distribution is sometimes informally called a bell curve.[6] However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions).



Definitions

Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution. This is a special case when [math]\displaystyle{ \mu=0 }[/math] and [math]\displaystyle{ \sigma =1 }[/math], and it is described by this probability density function:[1]


[math]\displaystyle{ \varphi(x) = \frac 1{\sqrt{2\pi}}e^{- \frac 12 x^2} }[/math]



Here, the factor [math]\displaystyle{ 1/\sqrt{2\pi} }[/math] ensures that the total area under the curve [math]\displaystyle{ \varphi(x) }[/math] is equal to one. The factor [math]\displaystyle{ 1/2 }[/math] in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around [math]\displaystyle{ x=0 }[/math], where it attains its maximum value [math]\displaystyle{ 1/\sqrt{2\pi} }[/math] and has inflection points at [math]\displaystyle{ x=+1 }[/math] and [math]\displaystyle{ x=-1 }[/math].

Authors differ on which normal distribution should be called the "standard" one. Carl Friedrich Gauss, for example, defined the standard normal as having a variance of [math]\displaystyle{ \sigma^2 = 1/2 }[/math]. That is:

[math]\displaystyle{ \varphi(x) = \frac{e^{-x^2}}{\sqrt\pi} }[/math]

On the other hand, Stephen Stigler[7] goes even further, defining the standard normal as having a variance of [math]\displaystyle{ \sigma^2 = 1/(2\pi) }[/math]:

[math]\displaystyle{ \varphi(x) = e^{-\pi x^2} }[/math]



General normal distribution

Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor [math]\displaystyle{ \sigma }[/math] (the standard deviation) and then translated by [math]\displaystyle{ \mu }[/math] (the mean value):



[math]\displaystyle{ f(x \mid \mu, \sigma^2) = \frac 1 \sigma \varphi\left(\frac{x-\mu} \sigma \right) }[/math]



The probability density must be scaled by [math]\displaystyle{ 1/\sigma }[/math] so that the integral is still 1.



If [math]\displaystyle{ Z }[/math] is a standard normal deviate, then [math]\displaystyle{ X=\sigma Z + \mu }[/math] will have a normal distribution with expected value [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma }[/math]. Conversely, if [math]\displaystyle{ X }[/math] is a normal deviate with parameters [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma^2 }[/math], then the distribution [math]\displaystyle{ Z=(X-\mu)/\sigma }[/math] will have a standard normal distribution. This variate is also called the standardized form of [math]\displaystyle{ X }[/math].
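A minimal sketch of this standardization (assuming NumPy; the seed, sample size and parameters are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 0.5
x = rng.normal(mu, sigma, size=100_000)  # X = sigma*Z + mu has mean mu, sd sigma
z = (x - mu) / sigma                     # standardized form of X
print(z.mean(), z.std())                 # approximately 0 and 1
</syntaxhighlight>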



Notation

The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter [math]\displaystyle{ \phi }[/math] (phi).[8] The alternative form of the Greek letter phi, [math]\displaystyle{ \varphi }[/math], is also used quite often.[1]



The normal distribution is often referred to as [math]\displaystyle{ N(\mu,\sigma^2) }[/math] or [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math].[1][9] Thus when a random variable [math]\displaystyle{ X }[/math] is normally distributed with mean [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma }[/math], one may write



[math]\displaystyle{ X \sim \mathcal{N}(\mu,\sigma^2). }[/math]



Alternative parameterizations


Some authors advocate using the precision [math]\displaystyle{ \tau }[/math] as the parameter defining the width of the distribution, instead of the deviation [math]\displaystyle{ \sigma }[/math] or the variance [math]\displaystyle{ \sigma^2 }[/math]. The precision is normally defined as the reciprocal of the variance, [math]\displaystyle{ 1/\sigma^2 }[/math].[10] The formula for the distribution then becomes


[math]\displaystyle{ f(x) = \sqrt{\frac\tau{2\pi}} e^{-\tau(x-\mu)^2/2}. }[/math]



This choice is claimed to have advantages in numerical computations when [math]\displaystyle{ \sigma }[/math] is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.




Alternatively, the reciprocal of the standard deviation [math]\displaystyle{ \tau^\prime=1/\sigma }[/math] might be defined as the precision, in which case the expression of the normal distribution becomes



[math]\displaystyle{ f(x) = \frac{\tau^\prime}{\sqrt{2\pi}} e^{-(\tau^\prime)^2(x-\mu)^2/2}. }[/math]



According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.
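A quick consistency sketch of the parameterizations described above (plain Python; the chosen values are arbitrary):

<syntaxhighlight lang="python">
import math

def pdf_sigma(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pdf_tau(x, mu, tau):        # precision tau = 1/sigma**2
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

def pdf_tau_prime(x, mu, tp):   # precision tau' = 1/sigma
    return tp / math.sqrt(2 * math.pi) * math.exp(-(tp ** 2) * (x - mu) ** 2 / 2)

mu, sigma, x = 0.7, 1.3, 2.0    # arbitrary values for this check
assert math.isclose(pdf_tau(x, mu, 1 / sigma ** 2), pdf_sigma(x, mu, sigma))
assert math.isclose(pdf_tau_prime(x, mu, 1 / sigma), pdf_sigma(x, mu, sigma))
</syntaxhighlight>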



Normal distributions form an exponential family with natural parameters [math]\displaystyle{ \textstyle\theta_1=\frac{\mu}{\sigma^2} }[/math] and [math]\displaystyle{ \textstyle\theta_2=\frac{-1}{2\sigma^2} }[/math], and natural statistics x and x2. The dual expectation parameters for normal distribution are η1 = μ and η2 = μ2 + σ2.



Cumulative distribution function

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter [math]\displaystyle{ \Phi }[/math] (phi),[1] is the integral


[math]\displaystyle{ \Phi(x) = \frac 1 {\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt }[/math]




The related error function [math]\displaystyle{ \operatorname{erf}(x) }[/math] gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2 falling in the range [math]\displaystyle{ [-x, x] }[/math]. That is:[1]


[math]\displaystyle{ \operatorname{erf}(x) = \frac 2 {\sqrt\pi} \int_0^x e^{-t^2} \, dt }[/math]

These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more.


The two functions are closely related, namely


[math]\displaystyle{ \Phi(x) = \frac{1}{2} \left[1 + \operatorname{erf}\left( \frac x {\sqrt 2} \right) \right] }[/math]
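This relation is easy to verify numerically; a minimal sketch using only the Python standard library (the test points are arbitrary):

<syntaxhighlight lang="python">
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert math.isclose(Phi(x), 0.5 * (1 + math.erf(x / math.sqrt(2))))
</syntaxhighlight>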



For a generic normal distribution with density [math]\displaystyle{ f }[/math], mean [math]\displaystyle{ \mu }[/math] and deviation [math]\displaystyle{ \sigma }[/math], the cumulative distribution function is

[math]\displaystyle{ F(x) = \Phi\left(\frac{x-\mu} \sigma \right) = \frac{1}{2} \left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma \sqrt 2 }\right)\right] }[/math]

The complement of the standard normal CDF, [math]\displaystyle{ Q(x) = 1 - \Phi(x) }[/math], is often called the Q-function, especially in engineering texts.[11][12] It gives the probability that the value of a standard normal random variable [math]\displaystyle{ X }[/math] will exceed [math]\displaystyle{ x }[/math]: [math]\displaystyle{ P(X\gt x) }[/math]. Other definitions of the [math]\displaystyle{ Q }[/math]-function, all of which are simple transformations of [math]\displaystyle{ \Phi }[/math], are also used occasionally.[13]


The graph of the standard normal CDF [math]\displaystyle{ \Phi }[/math] has 2-fold rotational symmetry around the point (0,1/2); that is, [math]\displaystyle{ \Phi(-x) = 1 - \Phi(x) }[/math]. Its antiderivative (indefinite integral) can be expressed as follows:

[math]\displaystyle{ \int \Phi(x)\, dx = x\Phi(x) + \varphi(x) + C. }[/math]


The CDF of the standard normal distribution can be expanded by Integration by parts into a series:


[math]\displaystyle{ \Phi(x)=\frac{1}{2} + \frac{1}{\sqrt{2\pi}}\cdot e^{-x^2/2} \left[x + \frac{x^3}{3} + \frac{x^5}{3\cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots\right] }[/math]

where [math]\displaystyle{ !! }[/math] denotes the double factorial.
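An illustrative evaluation of this series (truncating at 40 terms is an arbitrary choice that is ample for moderate [math]\displaystyle{ x }[/math]):

<syntaxhighlight lang="python">
import math
from statistics import NormalDist

def phi_series(x, terms=40):
    # Phi(x) = 1/2 + e^{-x^2/2}/sqrt(2*pi) * sum_n x^{2n+1}/(2n+1)!!
    total, term = 0.0, x  # term starts at x^1/1!!
    for n in range(terms):
        total += term
        term *= x * x / (2 * n + 3)  # x^{2n+1}/(2n+1)!! -> x^{2n+3}/(2n+3)!!
    return 0.5 + math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * total

for x in (0.5, 1.0, 2.0):
    assert math.isclose(phi_series(x), NormalDist().cdf(x))
</syntaxhighlight>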

An asymptotic expansion of the CDF for large x can also be derived using integration by parts. For more, see Error function#Asymptotic expansion.[14]


Standard deviation and coverage


File:Standard deviation diagram.svg
For the normal distribution, the values less than one standard deviation away from the mean account for 68.27% of the set; while two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.


About 68% of values drawn from a normal distribution are within one standard deviation σ away from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[6] This fact is known as the 68-95-99.7 (empirical) rule, or the 3-sigma rule.


More precisely, the probability that a normal deviate lies in the range between [math]\displaystyle{ \mu-n\sigma }[/math] and [math]\displaystyle{ \mu+n\sigma }[/math] is given by

[math]\displaystyle{ F(\mu+n\sigma) - F(\mu-n\sigma) = \Phi(n)-\Phi(-n) = \operatorname{erf} \left(\frac{n}{\sqrt{2}}\right). }[/math]

To 12 significant figures, the values for [math]\displaystyle{ n=1,2,\ldots , 6 }[/math] are:[15]


{| class="wikitable" style="text-align: center; margin-left: 24pt"
! [math]\displaystyle{ n }[/math] !! [math]\displaystyle{ p= F(\mu+n\sigma) - F(\mu-n\sigma) }[/math] !! [math]\displaystyle{ \text{i.e. }1-p }[/math] !! [math]\displaystyle{ \text{or }1\text{ in }p }[/math]
|-
| 1 || 0.682689492137 || 0.317310507863 || 3.15148718753
|-
| 2 || 0.954499736104 || 0.045500263896 || 21.9778945080
|-
| 3 || 0.997300203937 || 0.002699796063 || 370.398347345
|-
| 4 || 0.999936657516 || 0.000063342484 || 15787.1927673
|-
| 5 || 0.999999426697 || 0.000000573303 || 1744277.89362
|-
| 6 || 0.999999998027 || 0.000000001973 || 506797345.897
|}

For large [math]\displaystyle{ n }[/math], one can use the approximation [math]\displaystyle{ 1 - p \approx \frac{e^{-n^2/2}}{n\sqrt{\pi/2}} }[/math].
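The coverage column follows from [math]\displaystyle{ p = \operatorname{erf}(n/\sqrt{2}) }[/math]; a short sketch reproducing the table (Python standard library):

<syntaxhighlight lang="python">
import math

for n in range(1, 7):
    p = math.erf(n / math.sqrt(2))  # probability of falling within n standard deviations
    print(f"n={n}: p={p:.12f}, 1-p={1 - p:.12f}")
</syntaxhighlight>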


Quantile function




The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:


[math]\displaystyle{ \Phi^{-1}(p) = \sqrt2\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1). }[/math]

For a normal random variable with mean [math]\displaystyle{ \mu }[/math] and variance [math]\displaystyle{ \sigma^2 }[/math], the quantile function is


[math]\displaystyle{ F^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu + \sigma\sqrt 2 \operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1). }[/math]

The quantile [math]\displaystyle{ \Phi^{-1}(p) }[/math] of the standard normal distribution is commonly denoted as [math]\displaystyle{ z_p }[/math]. These values are used in hypothesis testing, construction of confidence intervals and Q-Q plots. A normal random variable [math]\displaystyle{ X }[/math] will exceed [math]\displaystyle{ \mu + z_p\sigma }[/math] with probability [math]\displaystyle{ 1-p }[/math], and will lie outside the interval [math]\displaystyle{ \mu \pm z_p\sigma }[/math] with probability [math]\displaystyle{ 2(1-p) }[/math]. In particular, the quantile [math]\displaystyle{ z_{0.975} }[/math] is 1.96; therefore a normal random variable will lie outside the interval [math]\displaystyle{ \mu \pm 1.96\sigma }[/math] in only 5% of cases.
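A minimal sketch of such a quantile computation (using the standard library's inverse CDF; the probability 0.975 yields the familiar 1.96):

<syntaxhighlight lang="python">
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)  # quantile z_{0.975} of the standard normal
print(round(z, 6))               # 1.959964, so mu +/- 1.96*sigma covers 95%
</syntaxhighlight>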


The following table gives the quantile [math]\displaystyle{ z_p }[/math] such that [math]\displaystyle{ X }[/math] will lie in the range [math]\displaystyle{ \mu \pm z_p\sigma }[/math] with a specified probability [math]\displaystyle{ p }[/math]. These values are useful to determine tolerance interval for sample averages and other statistical estimators with normal (or asymptotically normal) distributions:.[16][17] NOTE: the following table shows [math]\displaystyle{ \sqrt 2 \operatorname{erf}^{-1}(p)=\Phi^{-1}\left(\frac{p+1}{2}\right) }[/math], not [math]\displaystyle{ \Phi^{-1}(p) }[/math] as defined above.


{| class="wikitable" style="text-align: left; margin-left: 24pt; border: none; background: none;"
! [math]\displaystyle{ p }[/math] !! [math]\displaystyle{ z_p }[/math] !! [math]\displaystyle{ p }[/math] !! [math]\displaystyle{ z_p }[/math]
|-
| 0.80 || 1.281551565545 || 0.999 || 3.290526731492
|-
| 0.90 || 1.644853626951 || 0.9999 || 3.890591886413
|-
| 0.95 || 1.959963984540 || 0.99999 || 4.417173413469
|-
| 0.98 || 2.326347874041 || 0.999999 || 4.891638475699
|-
| 0.99 || 2.575829303549 || 0.9999999 || 5.326723886384
|-
| 0.995 || 2.807033768344 || 0.99999999 || 5.730728868236
|-
| 0.998 || 3.090232306168 || 0.999999999 || 6.109410204869
|}

For small [math]\displaystyle{ p }[/math], the quantile function has the useful asymptotic expansion

[math]\displaystyle{ \Phi^{-1}(p)=-\sqrt{\ln\frac{1}{p^2}-\ln\ln\frac{1}{p^2}-\ln(2\pi)}+\mathcal{o}(1). }[/math]

Properties

The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal distribution is practically zero when the value [math]\displaystyle{ x }[/math] lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers, i.e. values that lie many standard deviations away from the mean; least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.

The Gaussian distribution belongs to the family of stable distributions which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution.


Symmetries and derivatives

The normal distribution with density [math]\displaystyle{ f(x) }[/math] (mean [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma \gt 0 }[/math]) has the following properties:

  • It is symmetric around the point [math]\displaystyle{ x=\mu, }[/math] which is at the same time the mode, the median and the mean of the distribution.[18]
  • It is unimodal: its first derivative is positive for [math]\displaystyle{ x\lt \mu, }[/math] negative for [math]\displaystyle{ x\gt \mu, }[/math] and zero only at [math]\displaystyle{ x=\mu. }[/math]
  • The area under the curve and over the [math]\displaystyle{ x }[/math]-axis is unity (i.e. equal to one).
  • Its first derivative is [math]\displaystyle{ f^\prime(x)=-\frac{x-\mu}{\sigma^2} f(x). }[/math] (See the numerical sketch below.)
  • Its density has two inflection points (where the second derivative of [math]\displaystyle{ f }[/math] is zero and changes sign), located one standard deviation away from the mean, namely at [math]\displaystyle{ x=\mu-\sigma }[/math] and [math]\displaystyle{ x=\mu+\sigma. }[/math][18]

Furthermore, the density [math]\displaystyle{ \varphi }[/math] of the standard normal distribution (i.e. [math]\displaystyle{ \mu=0 }[/math] and [math]\displaystyle{ \sigma=1 }[/math]) also has the following properties:

  • Its first derivative is [math]\displaystyle{ \varphi^\prime(x)=-x\varphi(x). }[/math]
  • Its second derivative is [math]\displaystyle{ \varphi^{\prime\prime}(x)=(x^2-1)\varphi(x) }[/math]
  • More generally, its nth derivative is [math]\displaystyle{ \varphi^{(n)}(x) = (-1)^n\operatorname{He}_n(x)\varphi(x), }[/math] where [math]\displaystyle{ \operatorname{He}_n(x) }[/math] is the nth (probabilist) Hermite polynomial.[20]
  • The probability that a normally distributed variable [math]\displaystyle{ X }[/math] with known [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma }[/math] is in a particular set, can be calculated by using the fact that the fraction [math]\displaystyle{ Z = (X-\mu)/\sigma }[/math] has a standard normal distribution.
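A finite-difference sketch checking the first-derivative identity listed above for [math]\displaystyle{ f }[/math] (step size and parameters are arbitrary):

<syntaxhighlight lang="python">
import math
from statistics import NormalDist

mu, sigma, x, h = 0.3, 1.7, 1.1, 1e-6
d = NormalDist(mu, sigma)
numeric = (d.pdf(x + h) - d.pdf(x - h)) / (2 * h)  # central difference
exact = -((x - mu) / sigma ** 2) * d.pdf(x)        # f'(x) = -((x-mu)/sigma^2) f(x)
assert math.isclose(numeric, exact, rel_tol=1e-6)
</syntaxhighlight>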

Moments

The plain and absolute moments of a variable [math]\displaystyle{ X }[/math] are the expected values of [math]\displaystyle{ X^p }[/math] and [math]\displaystyle{ |X|^p }[/math], respectively. If the expected value [math]\displaystyle{ \mu }[/math] of [math]\displaystyle{ X }[/math] is zero, these parameters are called central moments. Usually we are interested only in moments with integer order [math]\displaystyle{ \ p }[/math].

If [math]\displaystyle{ X }[/math] has a normal distribution, these moments exist and are finite for any [math]\displaystyle{ p }[/math] whose real part is greater than −1. For any non-negative integer [math]\displaystyle{ p }[/math], the plain central moments are:[21]

[math]\displaystyle{ \operatorname{E}\left[(X-\mu)^p\right] =
  \begin{cases}
    0 & \text{if }p\text{ is odd,} \\
    \sigma^p (p-1)!! & \text{if }p\text{ is even.}
  \end{cases} }[/math]

Here [math]\displaystyle{ n!! }[/math] denotes the double factorial, that is, the product of all numbers from [math]\displaystyle{ n }[/math] to 1 that have the same parity as [math]\displaystyle{ n. }[/math]
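An illustrative check of the even-order case against a Monte Carlo estimate (assuming NumPy; seed, sample size and parameters are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

def double_factorial(n):
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=1_000_000)
for p in (2, 4, 6):
    exact = sigma ** p * double_factorial(p - 1)  # sigma^p (p-1)!!
    print(p, exact, np.mean((x - mu) ** p))       # the two agree to within ~1%
</syntaxhighlight>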

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer [math]\displaystyle{ p, }[/math]

[math]\displaystyle{ \begin{align}
  \operatorname{E}\left[|X - \mu|^p\right] &= \sigma^p (p-1)!! \cdot \begin{cases}
    \sqrt{\frac{2}{\pi}} & \text{if }p\text{ is odd} \\
    1 & \text{if }p\text{ is even}
  \end{cases} \\
  &= \sigma^p \cdot \frac{2^{p/2}\Gamma\left(\frac{p+1} 2 \right)}{\sqrt\pi}.
\end{align} }[/math]

The last formula is valid also for any non-integer [math]\displaystyle{ p\gt -1. }[/math] When the mean [math]\displaystyle{ \mu \ne 0, }[/math] the plain and absolute moments can be expressed in terms of confluent hypergeometric functions [math]\displaystyle{ {}_1F_1 }[/math] and [math]\displaystyle{ U. }[/math]

[math]\displaystyle{ \begin{align}
  \operatorname{E}\left[X^p\right] &= \sigma^p\cdot (-i\sqrt 2)^p U\left(-\frac{p}{2}, \frac{1}{2}, -\frac{1}{2} \left( \frac \mu \sigma \right)^2 \right), \\
  \operatorname{E}\left[|X|^p \right] &= \sigma^p \cdot 2^{p/2} \frac {\Gamma\left(\frac{1+p} 2\right)}{\sqrt\pi} {}_1F_1\left( -\frac{p}{2}, \frac{1}{2}, -\frac{1}{2} \left( \frac \mu \sigma \right)^2 \right).
\end{align} }[/math]

These expressions remain valid even if [math]\displaystyle{ p }[/math] is not an integer. See also generalized Hermite polynomials.

{| class="wikitable" style="background: #fff; margin: auto;"
! Order !! Non-central moment !! Central moment
|-
| 1 || [math]\displaystyle{ \mu }[/math] || [math]\displaystyle{ 0 }[/math]
|-
| 2 || [math]\displaystyle{ \mu^2+\sigma^2 }[/math] || [math]\displaystyle{ \sigma^2 }[/math]
|-
| 3 || [math]\displaystyle{ \mu^3+3\mu\sigma^2 }[/math] || [math]\displaystyle{ 0 }[/math]
|-
| 4 || [math]\displaystyle{ \mu^4+6\mu^2\sigma^2+3\sigma^4 }[/math] || [math]\displaystyle{ 3\sigma^4 }[/math]
|-
| 5 || [math]\displaystyle{ \mu^5+10\mu^3\sigma^2+15\mu\sigma^4 }[/math] || [math]\displaystyle{ 0 }[/math]
|-
| 6 || [math]\displaystyle{ \mu^6+15\mu^4\sigma^2+45\mu^2\sigma^4+15\sigma^6 }[/math] || [math]\displaystyle{ 15\sigma^6 }[/math]
|-
| 7 || [math]\displaystyle{ \mu^7+21\mu^5\sigma^2+105\mu^3\sigma^4+105\mu\sigma^6 }[/math] || [math]\displaystyle{ 0 }[/math]
|-
| 8 || [math]\displaystyle{ \mu^8+28\mu^6\sigma^2+210\mu^4\sigma^4+420\mu^2\sigma^6+105\sigma^8 }[/math] || [math]\displaystyle{ 105\sigma^8 }[/math]
|}

The expectation of [math]\displaystyle{ X }[/math] conditioned on the event that [math]\displaystyle{ X }[/math] lies in an interval [math]\displaystyle{ [a,b] }[/math] is given by



[math]\displaystyle{ \operatorname{E}\left[X \mid a\lt X\lt b \right] = \mu - \sigma^2\frac{f(b)-f(a)}{F(b)-F(a)} }[/math]


where [math]\displaystyle{ f }[/math] and [math]\displaystyle{ F }[/math] respectively are the density and the cumulative distribution function of [math]\displaystyle{ X }[/math]. For [math]\displaystyle{ b=\infty }[/math] this is known as the inverse Mills ratio. Note that above, density [math]\displaystyle{ f }[/math] of [math]\displaystyle{ X }[/math] is used instead of standard normal density as in inverse Mills ratio, so here we have [math]\displaystyle{ \sigma^2 }[/math] instead of [math]\displaystyle{ \sigma }[/math].

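A small numerical illustration of this truncated-mean formula (the interval and parameters are arbitrary; NumPy is used for the sampling check):

<syntaxhighlight lang="python">
import numpy as np
from statistics import NormalDist

mu, sigma, a, b = 0.0, 1.0, -1.0, 2.0
d = NormalDist(mu, sigma)
formula = mu - sigma ** 2 * (d.pdf(b) - d.pdf(a)) / (d.cdf(b) - d.cdf(a))

rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, size=1_000_000)
print(formula, x[(x > a) & (x < b)].mean())  # both approximately 0.23
</syntaxhighlight>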


Fourier transform and characteristic function

The Fourier transform of a normal density [math]\displaystyle{ f }[/math] with mean [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma }[/math] is

[math]\displaystyle{ \hat f(t) = \int_{-\infty}^\infty f(x)e^{-itx} \, dx = e^{ -i\mu t} e^{- \frac12 (\sigma t)^2} }[/math]

where [math]\displaystyle{ i }[/math] is the imaginary unit. If the mean [math]\displaystyle{ \mu=0 }[/math], the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation [math]\displaystyle{ 1/\sigma }[/math]. In particular, the standard normal distribution [math]\displaystyle{ \varphi }[/math] is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable [math]\displaystyle{ X }[/math] is closely connected to the characteristic function [math]\displaystyle{ \varphi_X(t) }[/math] of that variable, which is defined as the expected value of [math]\displaystyle{ e^{itX} }[/math], as a function of the real variable [math]\displaystyle{ t }[/math] (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-value variable [math]\displaystyle{ t }[/math]. The relation between both is:

在概率论中,一个实值随机变量的傅里叶变换与该变量的特征函数相关,这个变量被定义为实值变量的一个函数。这个定义可以解析地扩展为复值变量 < math > t </math > 。两者之间的关系是:

[math]\displaystyle{ \sigma^2 }[/math]

[math]\displaystyle{ \varphi_X(t) = \hat f(-t) }[/math]

[ math ] varphi _ x (t) = hat f (- t) </math >

3 [math]\displaystyle{ \mu^3+3\mu\sigma^2 }[/math]

The moment generating function of a real random variable [math]\displaystyle{ X }[/math] is the expected value of [math]\displaystyle{ e^{tX} }[/math], as a function of the real parameter [math]\displaystyle{ t }[/math]. For a normal distribution with density [math]\displaystyle{ f }[/math], mean [math]\displaystyle{ \mu }[/math] and deviation [math]\displaystyle{ \sigma }[/math], the moment generating function exists and is equal to

一个真正的随机变量的时刻母函数是 < math > e ^ { tX } </math > 的期望值,作为一个真实参数的函数。对于密度 < math > f </math > ,平均值 < math > mu </math > 和偏差 < math > sigma </math > 的正态分布,母函数存在,等于

[math]\displaystyle{ 0 }[/math]
4 [math]\displaystyle{ \mu^4+6\mu^2\sigma^2+3\sigma^4 }[/math]

The cumulant generating function is the logarithm of the moment generating function, namely

累积量母函数是母函数矩的对数,即

[math]\displaystyle{ 3\sigma^4 }[/math]
5 [math]\displaystyle{ \mu^5+10\mu^3\sigma^2+15\mu\sigma^4 }[/math]

Since this is a quadratic polynomial in [math]\displaystyle{ t }[/math], only the first two cumulants are nonzero, namely the mean [math]\displaystyle{ \mu }[/math] and the variance [math]\displaystyle{ \sigma^2 }[/math].

因为这是一个二次多项式,所以只有前两个累积量是非零的,即均值 < math > mu </math > 和方差 < math > sigma ^ 2 </math > 。

[math]\displaystyle{ 0 }[/math]
6

Within Stein's method the Stein operator and class of a random variable [math]\displaystyle{ X \sim \mathcal{N}(\mu, \sigma^2) }[/math] are [math]\displaystyle{ \mathcal{A}f(x) = \sigma^2 f'(x) - (x-\mu)f(x) }[/math] and [math]\displaystyle{ \mathcal{F} }[/math] the class of all absolutely continuous functions [math]\displaystyle{ f : \R \to \R \mbox{ such that }\mathbb{E}[|f'(X)|]\lt \infty }[/math].

在 Stein 的方法中,Stein 算子和随机变量类 < math > x sim mathcal { n }(mu,sigma ^ 2) </math > < math > a } f (x) = sigma ^ 2 f’(x)-(x-mu) f (x) </math > 和 < math > cal > cal </math > 所有绝对连续函数的类 < math > f: r 到 r mbox < bb { e }[ | f’(x)] | < infty math > 。

[math]\displaystyle{ \mu^6+15\mu^4\sigma^2+45\mu^2\sigma^4+15\sigma^6 }[/math] [math]\displaystyle{ 15\sigma^6 }[/math]
7 [math]\displaystyle{ \mu^7+21\mu^5\sigma^2+105\mu^3\sigma^4+105\mu\sigma^6 }[/math]

However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's "delta function" [math]\displaystyle{ \delta }[/math] translated by the mean [math]\displaystyle{ \mu }[/math], that is [math]\displaystyle{ f(x)=\delta(x-\mu). }[/math]

然而,我们可以将方差为零的正态分布定义为广义函数,具体来说,就是将 Dirac 的“ delta 函数” < math > delta </math > 翻译成平均值 < math > mu </math > ,即 < math > f (x) = delta (x-mu)。数学

[math]\displaystyle{ 0 }[/math]

Its CDF is then the Heaviside step function translated by the mean [math]\displaystyle{ \mu }[/math], namely

它的 CDF 是单位阶跃函数的平均值

[math]\displaystyle{ 105\sigma^8 }[/math]
 1 & \text{if }x \geq \mu

1 & text { if } x geq mu

\end{cases}

结束{ cases }


</math>

数学

The expectation of [math]\displaystyle{ X }[/math] conditioned on the event that [math]\displaystyle{ X }[/math] lies in an interval [math]\displaystyle{ [a,b] }[/math] is given by

[math]\displaystyle{ \operatorname{E}\left[X \mid a\lt X\lt b \right] = \mu - \sigma^2\frac{f(b)-f(a)}{F(b)-F(a)} }[/math]

where [math]\displaystyle{ f }[/math] and [math]\displaystyle{ F }[/math] respectively are the density and the cumulative distribution function of [math]\displaystyle{ X }[/math]. For [math]\displaystyle{ b=\infty }[/math] this is known as the inverse Mills ratio. Note that above, density [math]\displaystyle{ f }[/math] of [math]\displaystyle{ X }[/math] is used instead of standard normal density as in inverse Mills ratio, so here we have [math]\displaystyle{ \sigma^2 }[/math] instead of [math]\displaystyle{ \sigma }[/math].
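A minimal numerical check of this identity, assuming NumPy/SciPy are available (the variable names below are ours); note that scipy.stats.truncnorm expects its bounds standardized as (a − μ)/σ:

<syntaxhighlight lang="python">
from scipy.stats import norm, truncnorm

mu, sigma = 1.0, 2.0
a, b = 0.0, 3.0

# E[X | a < X < b] = mu - sigma^2 (f(b) - f(a)) / (F(b) - F(a)),
# where f and F are the density and CDF of X itself, not of the
# standard normal as in the inverse Mills ratio.
X = norm(loc=mu, scale=sigma)
cond_mean = mu - sigma**2 * (X.pdf(b) - X.pdf(a)) / (X.cdf(b) - X.cdf(a))

# Reference value from SciPy's truncated normal distribution.
ref = truncnorm.mean((a - mu) / sigma, (b - mu) / sigma, loc=mu, scale=sigma)
print(cond_mean, ref)  # the two values agree to floating-point accuracy
</syntaxhighlight>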



Fourier transform and characteristic function

The Fourier transform of a normal density [math]\displaystyle{ f }[/math] with mean [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma }[/math] is[22]



[math]\displaystyle{ \hat f(t) = \int_{-\infty}^\infty f(x)e^{-itx} \, dx = e^{ -i\mu t} e^{- \frac12 (\sigma t)^2} }[/math]

where [math]\displaystyle{ i }[/math] is the imaginary unit. If the mean [math]\displaystyle{ \mu=0 }[/math], the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation [math]\displaystyle{ 1/\sigma }[/math]. In particular, the standard normal distribution [math]\displaystyle{ \varphi }[/math] is an eigenfunction of the Fourier transform.


In probability theory, the Fourier transform of the probability distribution of a real-valued random variable [math]\displaystyle{ X }[/math] is closely connected to the characteristic function [math]\displaystyle{ \varphi_X(t) }[/math] of that variable, which is defined as the expected value of [math]\displaystyle{ e^{itX} }[/math], as a function of the real variable [math]\displaystyle{ t }[/math] (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-value variable [math]\displaystyle{ t }[/math].[23] The relation between both is:


[math]\displaystyle{ \varphi_X(t) = \hat f(-t) }[/math]
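This relation is easy to probe by simulation: averaging e^{itX} over draws of X should reproduce e^{iμt − (σt)²/2}. A sketch, with arbitrary parameter values, sample size, and seed:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, t = 0.5, 1.5, 0.7
x = rng.normal(mu, sigma, size=1_000_000)

empirical = np.mean(np.exp(1j * t * x))              # Monte Carlo E[e^{itX}]
exact = np.exp(1j * mu * t - 0.5 * (sigma * t)**2)   # hat f(-t) from the text
print(empirical, exact)  # agree to roughly three decimal places
</syntaxhighlight>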



Moment and cumulant generating functions

The moment generating function of a real random variable [math]\displaystyle{ X }[/math] is the expected value of [math]\displaystyle{ e^{tX} }[/math], as a function of the real parameter [math]\displaystyle{ t }[/math]. For a normal distribution with density [math]\displaystyle{ f }[/math], mean [math]\displaystyle{ \mu }[/math] and deviation [math]\displaystyle{ \sigma }[/math], the moment generating function exists and is equal to

[math]\displaystyle{ M(t) = \operatorname{E}[e^{tX}] = \hat f(it) = e^{\mu t} e^{\tfrac12 \sigma^2 t^2} }[/math]


The cumulant generating function is the logarithm of the moment generating function, namely



[math]\displaystyle{ g(t) = \ln M(t) = \mu t + \tfrac 12 \sigma^2 t^2 }[/math]



Since this is a quadratic polynomial in [math]\displaystyle{ t }[/math], only the first two cumulants are nonzero, namely the mean [math]\displaystyle{ \mu }[/math] and the variance [math]\displaystyle{ \sigma^2 }[/math].
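A Monte Carlo sketch of the moment generating function, comparing E[e^{tX}] with the closed form above (the parameter values and sample size are arbitrary choices of ours):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, t = 0.3, 0.8, 0.5
x = rng.normal(mu, sigma, size=1_000_000)

mc = np.mean(np.exp(t * x))                      # E[e^{tX}] by simulation
closed = np.exp(mu * t + 0.5 * sigma**2 * t**2)  # M(t) from the text
print(mc, closed)
</syntaxhighlight>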



Stein operator and class

Within Stein's method the Stein operator and class of a random variable [math]\displaystyle{ X \sim \mathcal{N}(\mu, \sigma^2) }[/math] are [math]\displaystyle{ \mathcal{A}f(x) = \sigma^2 f'(x) - (x-\mu)f(x) }[/math] and [math]\displaystyle{ \mathcal{F} }[/math] the class of all absolutely continuous functions [math]\displaystyle{ f : \R \to \R \mbox{ such that }\mathbb{E}[|f'(X)|]\lt \infty }[/math].
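A quick simulation of this characterization: the operator should have zero expectation under [math]\displaystyle{ X \sim \mathcal{N}(\mu, \sigma^2) }[/math] for functions in the class. The test function sin(x) is our arbitrary choice:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)

# A f(x) = sigma^2 f'(x) - (x - mu) f(x), with f = sin and f' = cos
stein = sigma**2 * np.cos(x) - (x - mu) * np.sin(x)
print(stein.mean())  # close to 0, up to Monte Carlo error
</syntaxhighlight>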

Zero-variance limit


In the limit when [math]\displaystyle{ \sigma }[/math] tends to zero, the probability density [math]\displaystyle{ f(x) }[/math] eventually tends to zero at any [math]\displaystyle{ x\ne \mu }[/math], but grows without limit if [math]\displaystyle{ x = \mu }[/math], while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function when [math]\displaystyle{ \sigma = 0 }[/math].

However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's "delta function" [math]\displaystyle{ \delta }[/math] translated by the mean [math]\displaystyle{ \mu }[/math], that is [math]\displaystyle{ f(x)=\delta(x-\mu). }[/math]

Its CDF is then the Heaviside step function translated by the mean [math]\displaystyle{ \mu }[/math], namely

[math]\displaystyle{ F(x) = \begin{cases}
 0 & \text{if }x < \mu \\
 1 & \text{if }x \geq \mu
\end{cases} }[/math]



Maximum entropy

Of all probability distributions over the reals with a specified mean [math]\displaystyle{ \mu }[/math] and variance [math]\displaystyle{ \sigma^2 }[/math], the normal distribution [math]\displaystyle{ N(\mu,\sigma^2) }[/math] is the one with maximum entropy.[24] If [math]\displaystyle{ X }[/math] is a continuous random variable with probability density [math]\displaystyle{ f(x) }[/math], then the entropy of [math]\displaystyle{ X }[/math] is defined as[25][26][27]


[math]\displaystyle{ H(X) = - \int_{-\infty}^\infty f(x)\log f(x)\, dx }[/math]



where [math]\displaystyle{ f(x)\log f(x) }[/math] is understood to be zero whenever [math]\displaystyle{ f(x)=0 }[/math]. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified variance, by using variational calculus. A function with two Lagrange multipliers is defined:



[math]\displaystyle{ L=\int_{-\infty}^\infty f(x)\ln(f(x))\,dx-\lambda_0\left(1-\int_{-\infty}^\infty f(x)\,dx\right)-\lambda\left(\sigma^2-\int_{-\infty}^\infty f(x)(x-\mu)^2\,dx\right) }[/math]



where [math]\displaystyle{ f(x) }[/math] is, for now, regarded as some density function with mean [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma }[/math].



At maximum entropy, a small variation [math]\displaystyle{ \delta f(x) }[/math] about [math]\displaystyle{ f(x) }[/math] will produce a variation [math]\displaystyle{ \delta L }[/math] about [math]\displaystyle{ L }[/math] which is equal to 0:



[math]\displaystyle{ 0=\delta L=\int_{-\infty}^\infty \delta f(x)\left (\ln(f(x))+1+\lambda_0+\lambda(x-\mu)^2\right )\,dx }[/math]



Since this must hold for any small [math]\displaystyle{ \delta f(x) }[/math], the term in brackets must be zero, and solving for [math]\displaystyle{ f(x) }[/math] yields:



[math]\displaystyle{ f(x)=e^{-\lambda_0-1-\lambda(x-\mu)^2} }[/math]

Using the constraint equations to solve for [math]\displaystyle{ \lambda_0 }[/math] and [math]\displaystyle{ \lambda }[/math] yields the density of the normal distribution:



[math]\displaystyle{ f(x, \mu, \sigma)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}} }[/math]

The entropy of a normal distribution is equal to


[math]\displaystyle{ H(x)=\tfrac{1}{2}(1+\log(2\sigma^2\pi)) }[/math]
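A sketch comparing this closed form with direct numerical integration of −f log f (the cutoff at ±30σ is a practical choice of ours; the omitted tail mass is negligible):

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 0.0, 1.7
f = norm(mu, sigma).pdf

numeric, _ = quad(lambda x: -f(x) * np.log(f(x)),
                  mu - 30 * sigma, mu + 30 * sigma)
closed = 0.5 * (1 + np.log(2 * sigma**2 * np.pi))
print(numeric, closed)
</syntaxhighlight>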

Operations on normal deviates


The family of normal distributions is closed under linear transformations: if [math]\displaystyle{ X }[/math] is normally distributed with mean [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma }[/math], then the variable [math]\displaystyle{ Y=aX+b }[/math], for any real numbers [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math], is also normally distributed, with

mean [math]\displaystyle{ a\mu+b }[/math] and standard deviation [math]\displaystyle{ |a|\sigma }[/math].




Also if [math]\displaystyle{ X_1 }[/math] and [math]\displaystyle{ X_2 }[/math] are two independent normal random variables, with means [math]\displaystyle{ \mu_1 }[/math], [math]\displaystyle{ \mu_2 }[/math] and standard deviations [math]\displaystyle{ \sigma_1 }[/math], [math]\displaystyle{ \sigma_2 }[/math], then their sum [math]\displaystyle{ X_1+X_2 }[/math] will also be normally distributed,[proof] with mean [math]\displaystyle{ \mu_1 + \mu_2 }[/math] and variance [math]\displaystyle{ \sigma_1^2 + \sigma_2^2 }[/math].



In particular, if [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are independent normal deviates with zero mean and variance [math]\displaystyle{ \sigma^2 }[/math], then [math]\displaystyle{ X + Y }[/math] and [math]\displaystyle{ X - Y }[/math] are also independent and normally distributed, with zero mean and variance [math]\displaystyle{ 2\sigma^2 }[/math]. This is a special case of the polarization identity.[28]



Also, if [math]\displaystyle{ X_1 }[/math], [math]\displaystyle{ X_2 }[/math] are two independent normal deviates with mean [math]\displaystyle{ \mu }[/math] and deviation [math]\displaystyle{ \sigma }[/math], and [math]\displaystyle{ a }[/math], [math]\displaystyle{ b }[/math] are arbitrary real numbers, then the variable

[math]\displaystyle{ X_3 = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2+b^2}} + \mu }[/math]


is also normally distributed with mean [math]\displaystyle{ \mu }[/math] and deviation [math]\displaystyle{ \sigma }[/math]. It follows that the normal distribution is stable (with exponent [math]\displaystyle{ \alpha=2 }[/math]).


More generally, any linear combination of independent normal deviates is a normal deviate.
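A short NumPy sketch of these closure properties, checking first and second moments only (it does not test normality itself; all parameter values are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, a, b = 2.0, 1.5, -3.0, 4.0
y = a * rng.normal(mu, sigma, size=1_000_000) + b
print(y.mean(), a * mu + b)     # mean of aX + b
print(y.std(), abs(a) * sigma)  # standard deviation of aX + b

x1 = rng.normal(1.0, 2.0, size=1_000_000)
x2 = rng.normal(-1.0, 0.5, size=1_000_000)
s = x1 + x2                     # sum of independent normals
print(s.mean(), s.var())        # ~0.0 and ~4.25 = 2.0**2 + 0.5**2
</syntaxhighlight>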


Infinite divisibility and Cramér's theorem


For any positive integer [math]\displaystyle{ \text{n} }[/math], any normal distribution with mean [math]\displaystyle{ \mu }[/math] and variance [math]\displaystyle{ \sigma^2 }[/math] is the distribution of the sum of [math]\displaystyle{ \text{n} }[/math] independent normal deviates, each with mean [math]\displaystyle{ \frac{\mu}{n} }[/math] and variance [math]\displaystyle{ \frac{\sigma^2}{n} }[/math]. This property is called infinite divisibility.[29]
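A minimal simulation of this property: summing [math]\displaystyle{ \text{n} }[/math] independent deviates with mean μ/n and variance σ²/n reproduces the mean and variance of N(μ, σ²):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n = 3.0, 2.0, 10
parts = rng.normal(mu / n, sigma / np.sqrt(n), size=(n, 500_000))
total = parts.sum(axis=0)
print(total.mean(), total.var())  # ~3.0 and ~4.0
</syntaxhighlight>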



Conversely, if [math]\displaystyle{ X_1 }[/math] and [math]\displaystyle{ X_2 }[/math] are independent random variables and their sum [math]\displaystyle{ X_1+X_2 }[/math] has a normal distribution, then both [math]\displaystyle{ X_1 }[/math] and [math]\displaystyle{ X_2 }[/math] must be normal deviates.[30]



This result is known as Cramér’s decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[31]




Bernstein's theorem


Bernstein's theorem states that if [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are independent and [math]\displaystyle{ X + Y }[/math] and [math]\displaystyle{ X - Y }[/math] are also independent, then both X and Y must necessarily have normal distributions.[32][33]



More generally, if [math]\displaystyle{ X_1, \ldots, X_n }[/math] are independent random variables, then two distinct linear combinations [math]\displaystyle{ \sum{a_kX_k} }[/math] and [math]\displaystyle{ \sum{b_kX_k} }[/math]will be independent if and only if all [math]\displaystyle{ X_k }[/math] are normal and [math]\displaystyle{ \sum{a_kb_k\sigma_k^2=0} }[/math], where [math]\displaystyle{ \sigma_k^2 }[/math] denotes the variance of [math]\displaystyle{ X_k }[/math].[32]



Other properties

{{ordered list


|1= If the characteristic function [math]\displaystyle{ \phi_X }[/math] of some random variable [math]\displaystyle{ X }[/math] is of the form [math]\displaystyle{ \phi_X(t) = e^{Q(t)} }[/math], where [math]\displaystyle{ Q(t) }[/math] is a polynomial, then the Marcinkiewicz theorem (named after Józef Marcinkiewicz) asserts that [math]\displaystyle{ Q }[/math] can be at most a quadratic polynomial, and therefore [math]\displaystyle{ X }[/math] is a normal random variable.[31] The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants.


|2= If [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are jointly normal and uncorrelated, then they are independent. The requirement that [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] should be jointly normal is essential; without it the property does not hold.[34][35][proof] For non-normal random variables uncorrelatedness does not imply independence.


|3= The Kullback–Leibler divergence of one normal distribution [math]\displaystyle{ X_1 \sim N(\mu_1, \sigma^2_1) }[/math] from another [math]\displaystyle{ X_2 \sim N(\mu_2, \sigma^2_2) }[/math] is given by:[36]

[math]\displaystyle{ D_\mathrm{KL}( X_1 \,\|\, X_2 ) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} + \frac{1}{2}\left( \frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2} \right) }[/math]


The Hellinger distance between the same distributions is equal to

[math]\displaystyle{ H^2(X_1,X_2) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}} \, e^{-\frac{1}{4}\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}} }[/math]


|4= The Fisher information matrix for a normal distribution is diagonal and takes the form

[math]\displaystyle{ \mathcal I = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix} }[/math]


|5= The conjugate prior of the mean of a normal distribution is another normal distribution.[37] Specifically, if [math]\displaystyle{ x_1, \ldots, x_n }[/math] are iid [math]\displaystyle{ \sim N(\mu, \sigma^2) }[/math] and the prior is [math]\displaystyle{ \mu \sim N(\mu_0 , \sigma^2_0) }[/math], then the posterior distribution for the estimator of [math]\displaystyle{ \mu }[/math] will be


[math]\displaystyle{ \mu \mid x_1,\ldots,x_n \sim \mathcal{N}\left( \frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n}+\sigma_0^2},\left( \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{-1} \right) }[/math]


|6= The family of normal distributions not only forms an exponential family (EF), but in fact forms a natural exponential family (NEF) with quadratic variance function (NEF-QVF). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprises 6 families, including Poisson, Gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.


|7= In information geometry, the family of normal distributions forms a statistical manifold with constant curvature [math]\displaystyle{ -1 }[/math]. The same family is flat with respect to the (±1)-connections ∇[math]\displaystyle{ ^{(e)} }[/math] and ∇[math]\displaystyle{ ^{(m)} }[/math].[38]



}}
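As a numerical cross-check of the Kullback–Leibler expression in the list above (a sketch; the parameter values and the ±30σ integration cutoff are arbitrary choices of ours):

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 2.0
f1, f2 = norm(mu1, s1).pdf, norm(mu2, s2).pdf

closed = ((mu1 - mu2)**2 / (2 * s2**2)
          + 0.5 * (s1**2 / s2**2 - 1 - np.log(s1**2 / s2**2)))
numeric, _ = quad(lambda x: f1(x) * np.log(f1(x) / f2(x)),
                  mu1 - 30 * s1, mu1 + 30 * s1)
print(closed, numeric)
</syntaxhighlight>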



Related distributions


Central limit theorem

File:De moivre-laplace.gif: As the number of discrete events increases, the function begins to resemble a normal distribution.
File:Dice sum central limit theorem.svg: Comparison of probability density functions, [math]\displaystyle{ p(k) }[/math], for the sum of [math]\displaystyle{ n }[/math] fair 6-sided dice, showing their convergence to a normal distribution with increasing [math]\displaystyle{ n }[/math], in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).



The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where [math]\displaystyle{ X_1,\ldots ,X_n }[/math] are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance [math]\displaystyle{ \sigma^2 }[/math], and [math]\displaystyle{ Z }[/math] is their mean scaled by [math]\displaystyle{ \sqrt{n} }[/math]:

[math]\displaystyle{ Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) }[/math]

Then, as [math]\displaystyle{ n }[/math] increases, the probability distribution of [math]\displaystyle{ Z }[/math] will tend to the normal distribution with zero mean and variance [math]\displaystyle{ \sigma^2 }[/math].


The theorem can be extended to variables [math]\displaystyle{ (X_i) }[/math] that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.


Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.


The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

  • The binomial distribution [math]\displaystyle{ B(n,p) }[/math] is approximately normal with mean [math]\displaystyle{ np }[/math] and variance [math]\displaystyle{ np(1-p) }[/math] for large [math]\displaystyle{ n }[/math] and for [math]\displaystyle{ p }[/math] not too close to 0 or 1 (see the numerical check after this list).
  • The Poisson distribution with parameter [math]\displaystyle{ \lambda }[/math] is approximately normal with mean [math]\displaystyle{ \lambda }[/math] and variance [math]\displaystyle{ \lambda }[/math], for large values of [math]\displaystyle{ \lambda }[/math].[39]
  • The chi-squared distribution [math]\displaystyle{ \chi^2(k) }[/math] is approximately normal with mean [math]\displaystyle{ k }[/math] and variance [math]\displaystyle{ 2k }[/math], for large [math]\displaystyle{ k }[/math].
  • The Student's t-distribution [math]\displaystyle{ t(\nu) }[/math] is approximately normal with mean 0 and variance 1 when [math]\displaystyle{ \nu }[/math] is large.
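A quick check of the binomial case from the first item, using the usual continuity correction (the correction is our addition; the statement above is asymptotic):

<syntaxhighlight lang="python">
from scipy.stats import binom, norm

n, p, k = 100, 0.3, 35
exact = binom.cdf(k, n, p)
approx = norm.cdf(k + 0.5, loc=n * p, scale=(n * p * (1 - p))**0.5)
print(exact, approx)  # close for large n with p away from 0 and 1
</syntaxhighlight>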



Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.



A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions.



Operations on a single random variable


If [math]\displaystyle{ X }[/math] is distributed normally with mean [math]\displaystyle{ \mu }[/math] and variance [math]\displaystyle{ \sigma^2 }[/math], then

  • The exponential of [math]\displaystyle{ X }[/math] is distributed log-normally: [math]\displaystyle{ e^X \sim \ln(N(\mu, \sigma^2)) }[/math].



  • The absolute value of normalized residuals, [math]\displaystyle{ |X-\mu|/\sigma }[/math], has the chi distribution with one degree of freedom: [math]\displaystyle{ |X-\mu|/\sigma \sim \chi_1 }[/math]. (Both facts are checked in the sketch below.)
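A sketch of both facts, using SciPy's lognorm (which is parametrized as s = σ, scale = e^μ) and its chi distribution:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import lognorm, chi

rng = np.random.default_rng(5)
mu, sigma = 0.5, 0.75
x = rng.normal(mu, sigma, size=1_000_000)

# mean of e^X versus the lognormal mean exp(mu + sigma^2 / 2)
print(np.exp(x).mean(), lognorm.mean(sigma, scale=np.exp(mu)))
# median of |X - mu| / sigma versus the chi(1) median
print(np.median(np.abs(x - mu) / sigma), chi.median(1))
</syntaxhighlight>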




Combination of two independent random variables

If [math]\displaystyle{ X_1 }[/math] and [math]\displaystyle{ X_2 }[/math] are two independent standard normal random variables with mean 0 and variance 1, then

  • Their sum and difference is distributed normally with mean zero and variance two: [math]\displaystyle{ X_1 \pm X_2 \sim N(0, 2) }[/math].
  • Their product [math]\displaystyle{ Z=X_1X_2 }[/math] follows the Product distribution[40] with density function [math]\displaystyle{ f_Z(z) = \pi^{-1} K_0(|z|) }[/math] where [math]\displaystyle{ K_0 }[/math] is the modified Bessel function of the second kind. This distribution is symmetric around zero, unbounded at [math]\displaystyle{ z = 0 }[/math], and has the characteristic function [math]\displaystyle{ \phi_Z(t) = (1 + t^2)^{-1/2} }[/math].
  • Their ratio follows the standard Cauchy distribution: [math]\displaystyle{ X_1/ X_2 \sim \operatorname{Cauchy}(0, 1) }[/math] (see the sketch below).
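A Monte Carlo sketch of the sum and ratio items (the product case is omitted; sample size and seed are arbitrary). The Cauchy quartiles ±1 make a convenient comparison point because the ratio has no finite moments:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
x1 = rng.normal(size=1_000_000)
x2 = rng.normal(size=1_000_000)

print((x1 + x2).var())                       # ~2, matching X_1 + X_2 ~ N(0, 2)
print(np.percentile(x1 / x2, [25, 50, 75]))  # approximately [-1, 0, 1]
</syntaxhighlight>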


Combination of two or more independent random variables


  • If [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math] are independent standard normal random variables, then the sum of their squares has the chi-squared distribution with [math]\displaystyle{ \text{n} }[/math] degrees of freedom (a simulation check of this and the two facts below follows the formulas):

[math]\displaystyle{ X_1^2 + \cdots + X_n^2 \sim \chi_n^2. }[/math]

  • If [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math] are independent normally distributed random variables with means [math]\displaystyle{ \mu }[/math] and variances [math]\displaystyle{ \sigma^2 }[/math], then their sample mean is independent from the sample standard deviation,[41] which can be demonstrated using Basu's theorem or Cochran's theorem.[42] The ratio of these two quantities will have the Student's t-distribution with [math]\displaystyle{ \text{n}-1 }[/math] degrees of freedom:

[math]\displaystyle{ t = \frac{\overline X - \mu}{S/\sqrt{n}} = \frac{\frac{1}{n}(X_1+\cdots+X_n) - \mu}{\sqrt{\frac{1}{n(n-1)}\left[(X_1-\overline X)^2+\cdots+(X_n-\overline X)^2\right]}} \sim t_{n-1}. }[/math]

  • If [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math], [math]\displaystyle{ Y_1, Y_2, \ldots, Y_m }[/math] are independent standard normal random variables, then the ratio of their normalized sums of squares will have the F-distribution with (n, m) degrees of freedom:[43]

[math]\displaystyle{ F = \frac{\left(X_1^2+X_2^2+\cdots+X_n^2\right)/n}{\left(Y_1^2+Y_2^2+\cdots+Y_m^2\right)/m} \sim F_{n,m}. }[/math]
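The three facts above can be stress-tested with Kolmogorov–Smirnov comparisons; in this sketch (sample sizes and seed are arbitrary) the reported p-values should not be systematically small:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, m, reps = 5, 7, 100_000

z = rng.normal(size=(reps, n))
print(stats.kstest((z**2).sum(axis=1), 'chi2', args=(n,)).pvalue)

z = rng.normal(size=(reps, n))
t = z.mean(axis=1) / (z.std(axis=1, ddof=1) / np.sqrt(n))
print(stats.kstest(t, 't', args=(n - 1,)).pvalue)

z1 = rng.normal(size=(reps, n)); z2 = rng.normal(size=(reps, m))
F = ((z1**2).sum(axis=1) / n) / ((z2**2).sum(axis=1) / m)
print(stats.kstest(F, 'f', args=(n, m)).pvalue)
</syntaxhighlight>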


Operations on the density function


The [[split normal distribution]] is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one.  The [[truncated normal distribution]] results from rescaling a section of a single density function.





Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is one-dimensional) case (Case 1). All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.

  • The multivariate normal distribution describes the Gaussian law in the k-dimensional Euclidean space. A vector X ∈ Rk is multivariate-normally distributed if any linear combination of its components [math]\displaystyle{ \sum_j a_j X_j }[/math] has a (univariate) normal distribution. The variance of X is a k×k symmetric positive-definite matrix V. The multivariate normal distribution is a special case of the elliptical distributions. As such, its iso-density loci in the k = 2 case are ellipses and in the case of arbitrary k are ellipsoids.
  • Complex normal distribution deals with the complex normal vectors. A complex vector X ∈ Ck is said to be normal if both its real and imaginary components jointly possess a 2k-dimensional multivariate normal distribution. The variance-covariance structure of X is described by two matrices: the variance matrix Γ, and the relation matrix C.


  • Gaussian processes are the normally distributed stochastic processes. These can be viewed as elements of some infinite-dimensional Hilbert space H, and thus are the analogues of multivariate normal vectors for the case k = ∞. A random element h ∈ H is said to be normal if for any constant a ∈ H the scalar product (a, h) has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear covariance operator K: H → H. Several Gaussian processes became popular enough to have their own names:


    • Brownian motion,


One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. The examples of such extensions are:

  • Pearson distribution — a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
  • The generalized normal distribution, also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors.

A random variable X has a two-piece normal distribution if it has a distribution


[math]\displaystyle{ f_X( x ) = N( \mu, \sigma_1^2 ) \text{ if } x \le \mu }[/math]
[math]\displaystyle{ f_X( x ) = N( \mu, \sigma_2^2 ) \text{ if } x \ge \mu }[/math]


where [math]\displaystyle{ \mu }[/math] is the mean and [math]\displaystyle{ \sigma_1 }[/math] and [math]\displaystyle{ \sigma_2 }[/math] are the standard deviations of the distribution to the left and right of the mean respectively.


The mean, variance and third central moment of this distribution have been determined (John, S. (1982). "The three parameter two-piece normal family of distributions and its fitting". Communications in Statistics - Theory and Methods, 11(8), 879–885. doi:10.1080/03610928208828279).


[math]\displaystyle{ \operatorname{E}( X ) = \mu + \sqrt{\frac 2 \pi } ( \sigma_2 - \sigma_1 ) }[/math]


[math]\displaystyle{ \operatorname{V}( X ) = \left( 1 - \frac 2 \pi\right)( \sigma_2 - \sigma_1 )^2 + \sigma_1 \sigma_2 }[/math]
[math]\displaystyle{ \operatorname{T}( X ) = \sqrt{ \frac 2 \pi}( \sigma_2 - \sigma_1 ) \left[ \left( \frac 4 \pi - 1 \right) ( \sigma_2 - \sigma_1)^2 + \sigma_1 \sigma_2 \right] }[/math]

where E(X), V(X) and T(X) are the mean, variance, and third central moment respectively.


Statistical inference

Estimation of parameters



It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample [math]\displaystyle{ (x_1, \ldots, x_n) }[/math] from a normal [math]\displaystyle{ N(\mu, \sigma^2) }[/math] population we would like to learn the approximate values of parameters [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma^2 }[/math]. The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:

[math]\displaystyle{ \ln\mathcal{L}(\mu,\sigma^2) = \sum_{i=1}^n \ln f(x_i\mid\mu,\sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2. }[/math]

Taking derivatives with respect to [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma^2 }[/math] and solving the resulting system of first order conditions yields the maximum likelihood estimates:

[math]\displaystyle{ \hat{\mu} = \overline{x} \equiv \frac{1}{n}\sum_{i=1}^n x_i, \qquad \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2. }[/math]
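As a concrete illustration, a minimal Python sketch (assuming only NumPy; the helper name normal_mle is ours, not a standard routine) computes these closed-form estimates from a simulated sample:

<syntaxhighlight lang="python">
import numpy as np

def normal_mle(x):
    """Closed-form maximum likelihood estimates for a normal sample:
    the sample mean and the 1/n (biased) sample variance."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mu_hat = x.sum() / n
    sigma2_hat = np.sum((x - mu_hat) ** 2) / n
    return mu_hat, sigma2_hat

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=3.0, size=100_000)
print(normal_mle(sample))  # should be close to mu = 2 and sigma^2 = 9
</syntaxhighlight>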


Sample mean


Estimator [math]\displaystyle{ \textstyle\hat\mu }[/math] is called the sample mean, since it is the arithmetic mean of all observations. The statistic [math]\displaystyle{ \textstyle\overline{x} }[/math] is complete and sufficient for [math]\displaystyle{ \mu }[/math], and therefore by the Lehmann–Scheffé theorem, [math]\displaystyle{ \textstyle\hat\mu }[/math] is the uniformly minimum variance unbiased (UMVU) estimator.[44] In finite samples it is distributed normally:

[math]\displaystyle{ \hat\mu \sim \mathcal{N}(\mu,\sigma^2/n). }[/math]

The variance of this estimator is equal to the μμ-element of the inverse Fisher information matrix [math]\displaystyle{ \textstyle\mathcal{I}^{-1} }[/math]. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of [math]\displaystyle{ \textstyle\hat\mu }[/math] is proportional to [math]\displaystyle{ \textstyle1/\sqrt{n} }[/math], that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.


From the standpoint of the asymptotic theory, [math]\displaystyle{ \textstyle\hat\mu }[/math] is consistent, that is, it converges in probability to [math]\displaystyle{ \mu }[/math] as [math]\displaystyle{ n\rightarrow\infty }[/math]. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:

[math]\displaystyle{ \sqrt{n}(\hat\mu-\mu) \,\xrightarrow{d}\, \mathcal{N}(0,\sigma^2). }[/math]
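The 1/√n behavior of the standard error described above can be checked directly. A minimal Python sketch (assuming only NumPy; the seed and parameters are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

# Empirical check that the standard error of the sample mean scales as
# 1/sqrt(n): multiplying the sample size by 100 should divide the
# standard error by 10.
rng = np.random.default_rng(1)
for n in (100, 10_000):
    means = rng.normal(0.0, 1.0, size=(1_000, n)).mean(axis=1)
    print(n, means.std())  # roughly 0.1 for n = 100 and 0.01 for n = 10,000
</syntaxhighlight>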


Sample variance



The estimator [math]\displaystyle{ \textstyle\hat\sigma^2 }[/math] is called the sample variance, since it is the variance of the sample ([math]\displaystyle{ (x_1, \ldots, x_n) }[/math]). In practice, another estimator is often used instead of the [math]\displaystyle{ \textstyle\hat\sigma^2 }[/math]. This other estimator is denoted [math]\displaystyle{ s^2 }[/math], and is also called the sample variance, which represents a certain ambiguity in terminology; its square root [math]\displaystyle{ s }[/math] is called the sample standard deviation. The estimator [math]\displaystyle{ s^2 }[/math] differs from [math]\displaystyle{ \textstyle\hat\sigma^2 }[/math] by having (n − 1) instead of n in the denominator (the so-called Bessel's correction):

[math]\displaystyle{ s^2 = \frac{n}{n-1} \hat\sigma^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2. }[/math]


The difference between [math]\displaystyle{ s^2 }[/math] and [math]\displaystyle{ \textstyle\hat\sigma^2 }[/math] becomes negligibly small for large n. In finite samples however, the motivation behind the use of [math]\displaystyle{ s^2 }[/math] is that it is an unbiased estimator of the underlying parameter [math]\displaystyle{ \sigma^2 }[/math], whereas [math]\displaystyle{ \textstyle\hat\sigma^2 }[/math] is biased. Also, by the Lehmann–Scheffé theorem the estimator [math]\displaystyle{ s^2 }[/math] is uniformly minimum variance unbiased (UMVU),[44] which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator [math]\displaystyle{ \textstyle\hat\sigma^2 }[/math] is "better" than [math]\displaystyle{ s^2 }[/math] in terms of the mean squared error (MSE) criterion. In finite samples both [math]\displaystyle{ s^2 }[/math] and [math]\displaystyle{ \textstyle\hat\sigma^2 }[/math] have a scaled chi-squared distribution with (n − 1) degrees of freedom:


[math]\displaystyle{ s^2 \sim \frac{\sigma^2}{n-1} \cdot \chi^2_{n-1}, \qquad \hat\sigma^2 \sim \frac{\sigma^2}{n} \cdot \chi^2_{n-1}. }[/math]


The first of these expressions shows that the variance of [math]\displaystyle{ s^2 }[/math] is equal to [math]\displaystyle{ 2\sigma^4/(n-1) }[/math], which is slightly greater than the σσ-element of the inverse Fisher information matrix [math]\displaystyle{ \textstyle\mathcal{I}^{-1} }[/math]. Thus, [math]\displaystyle{ s^2 }[/math] is not an efficient estimator for [math]\displaystyle{ \sigma^2 }[/math], and moreover, since [math]\displaystyle{ s^2 }[/math] is UMVU, we can conclude that the finite-sample efficient estimator for [math]\displaystyle{ \sigma^2 }[/math] does not exist.
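The bias/MSE trade-off between the two estimators is easy to see numerically. A minimal Monte Carlo sketch in Python (assuming only NumPy; the chosen n and replication count are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

# Compare the two sample-variance estimators for sigma^2 = 1:
# s^2 (Bessel-corrected, unbiased) versus sigma_hat^2 = ((n-1)/n) s^2
# (the maximum likelihood estimator, biased but with lower MSE).
rng = np.random.default_rng(2)
n, reps, sigma2 = 10, 200_000, 1.0
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2 = x.var(axis=1, ddof=1)          # unbiased estimator s^2
sigma2_hat = x.var(axis=1, ddof=0)  # biased MLE sigma_hat^2
print("bias:", s2.mean() - sigma2, sigma2_hat.mean() - sigma2)
print("MSE :", ((s2 - sigma2) ** 2).mean(), ((sigma2_hat - sigma2) ** 2).mean())
# s^2 shows (near-)zero bias; sigma_hat^2 shows the smaller mean squared error.
</syntaxhighlight>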


Applying the asymptotic theory, both estimators [math]\displaystyle{ s^2 }[/math] and [math]\displaystyle{ \textstyle\hat\sigma^2 }[/math] are consistent, that is they converge in probability to [math]\displaystyle{ \sigma^2 }[/math] as the sample size [math]\displaystyle{ n\rightarrow\infty }[/math]. The two estimators are also both asymptotically normal:

[math]\displaystyle{ \sqrt{n}(\hat\sigma^2 - \sigma^2) \simeq \sqrt{n}(s^2-\sigma^2) \,\xrightarrow{d}\, \mathcal{N}(0,2\sigma^4). }[/math]

In particular, both estimators are asymptotically efficient for [math]\displaystyle{ \sigma^2 }[/math].



Confidence intervals



By Cochran's theorem, for normal distributions the sample mean [math]\displaystyle{ \textstyle\hat\mu }[/math] and the sample variance s2 are independent, which means there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between [math]\displaystyle{ \textstyle\hat\mu }[/math] and s can be employed to construct the so-called t-statistic:


[math]\displaystyle{ t = \frac{\hat\mu-\mu}{s/\sqrt{n}} = \frac{\overline{x}-\mu}{\sqrt{\frac{1}{n(n-1)}\sum(x_i-\overline{x})^2}} \sim t_{n-1} }[/math]

This quantity t has the Student's t-distribution with (n − 1) degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistics will allow us to construct the confidence interval for μ;[45] similarly, inverting the χ2 distribution of the statistic s2 will give us the confidence interval for σ2:[46]



[math]\displaystyle{ \mu \in \left[ \hat\mu - t_{n-1,1-\alpha/2} \frac{1}{\sqrt{n}}s, \hat\mu + t_{n-1,1-\alpha/2} \frac{1}{\sqrt{n}}s \right] \approx \left[ \hat\mu - |z_{\alpha/2}|\frac{1}{\sqrt n}s, \hat\mu + |z_{\alpha/2}|\frac{1}{\sqrt n}s \right], }[/math]

[math]\displaystyle{ \sigma^2 \in \left[ \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}}, \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \right] \approx \left[ s^2 - |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2, s^2 + |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2 \right], }[/math]



where tk,p and χ2k,p are the pth quantiles of the t- and χ2-distributions respectively. These confidence intervals are of the confidence level 1 − α, meaning that the true values μ and σ2 fall outside of these intervals with probability (or significance level) α. In practice people usually take α = 5%, resulting in the 95% confidence intervals. The approximate formulas in the display above were derived from the asymptotic distributions of [math]\displaystyle{ \textstyle\hat\mu }[/math] and s2. The approximate formulas become valid for large values of n, and are more convenient for the manual calculation since the standard normal quantiles zα/2 do not depend on n. In particular, the most popular value of α = 5% results in |z0.025| = 1.96.
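These intervals are straightforward to compute. A minimal Python sketch (assuming NumPy and SciPy; the function name normal_cis is ours) inverts the t- and χ2-quantiles exactly as in the displays above:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def normal_cis(x, alpha=0.05):
    """Exact confidence intervals for mu and sigma^2 from a normal sample,
    by inverting the t- and chi-squared distributions."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mu_hat = x.mean()
    s2 = x.var(ddof=1)                    # Bessel-corrected sample variance
    s = np.sqrt(s2)
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)           # t_{n-1,1-alpha/2}
    mu_ci = (mu_hat - t * s / np.sqrt(n), mu_hat + t * s / np.sqrt(n))
    chi2_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)  # chi^2_{n-1,1-alpha/2}
    chi2_lo = stats.chi2.ppf(alpha / 2, df=n - 1)      # chi^2_{n-1,alpha/2}
    var_ci = ((n - 1) * s2 / chi2_hi, (n - 1) * s2 / chi2_lo)
    return mu_ci, var_ci

rng = np.random.default_rng(3)
print(normal_cis(rng.normal(5.0, 2.0, size=50)))
</syntaxhighlight>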



Normality tests

Normality tests assess the likelihood that the given data set {x1, ..., xn} comes from a normal distribution. Typically the null hypothesis H0 is that the observations are distributed normally with unspecified mean μ and variance σ2, versus the alternative Ha that the distribution is arbitrary. Many tests (over 40) have been devised for this problem; the more prominent of them are outlined below:

  • "Visual" tests are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.
    • Q-Q plot — a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form (Φ−1(pk), x(k)), where the plotting points pk are equal to pk = (k − α)/(n + 1 − 2α) and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line.

    • P-P plot— similar to the Q-Q plot, but used much less frequently. This method consists of plotting the points (Φ(z(k)), pk), where [math]\displaystyle{ \textstyle z_{(k)} = (x_{(k)}-\hat\mu)/\hat\sigma }[/math]. For normally distributed data this plot should lie on a 45° line between (0, 0) and (1, 1).
    • Shapiro-Wilk test employs the fact that the line in the Q-Q plot has the slope of σ. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.


  • Moment tests:

    • D'Agostino's K-squared test
    • Jarque–Bera test
  • Empirical distribution function tests:
    • Lilliefors test (an adaptation of the Kolmogorov–Smirnov test)
    • Anderson–Darling test
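Several of the tests listed above are available in standard statistical libraries. A minimal Python sketch (assuming NumPy and SciPy; the seed and sample are arbitrary) runs a few of them on simulated normal data:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=500)

# Shapiro-Wilk: compares the Q-Q slope estimate with the sample variance.
print(stats.shapiro(x))            # large p-value: do not reject H0
# Jarque-Bera: a moment (skewness/kurtosis) test.
print(stats.jarque_bera(x))
# Anderson-Darling: an empirical distribution function test.
print(stats.anderson(x, dist='norm'))
# Q-Q plot coordinates (theoretical vs. ordered sample quantiles).
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist='norm')
print(slope, intercept, r)         # points near a straight line under H0
</syntaxhighlight>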

Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

  • Either the mean, or the variance, or neither, may be considered a fixed quantity.


  • When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified.
  • Both univariate and multivariate cases need to be considered.



The formulas for the non-linear-regression cases are summarized in the conjugate prior article.

Sum of two quadratics

Scalar form

The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious.



[math]\displaystyle{ a(x-y)^2 + b(x-z)^2 = (a + b)\left(x - \frac{ay+bz}{a+b}\right)^2 + \frac{ab}{a+b}(y-z)^2 }[/math]



This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and completing the square. Note the following about the complex constant factors attached to some of the terms:

  1. The factor [math]\displaystyle{ \frac{ay+bz}{a+b} }[/math] has the form of a weighted average of y and z.

  2. [math]\displaystyle{ \frac{ab}{a+b} = \frac{1}{\frac{1}{a}+\frac{1}{b}} = (a^{-1} + b^{-1})^{-1}. }[/math] This shows that this factor can be thought of as resulting from a situation where the reciprocals of quantities a and b add directly, so to combine a and b themselves, it's necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the harmonic mean, so it is not surprising that [math]\displaystyle{ \frac{ab}{a+b} }[/math] is one-half the harmonic mean of a and b.
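A quick numeric spot-check of the scalar identity above (a Python sketch assuming only NumPy; the random values are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

# Verify: a(x-y)^2 + b(x-z)^2
#           = (a+b) (x - (ay+bz)/(a+b))^2 + ab/(a+b) (y-z)^2
rng = np.random.default_rng(5)
a, b, x, y, z = rng.uniform(0.1, 5.0, size=5)
lhs = a * (x - y) ** 2 + b * (x - z) ** 2
rhs = (a + b) * (x - (a * y + b * z) / (a + b)) ** 2 \
      + a * b / (a + b) * (y - z) ** 2
print(np.isclose(lhs, rhs))  # True
</syntaxhighlight>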

Vector form


A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k, and A and B are symmetric, invertible matrices of size [math]\displaystyle{ k\times k }[/math], then



[math]\displaystyle{ \begin{align}
& (\mathbf{y}-\mathbf{x})'\mathbf{A}(\mathbf{y}-\mathbf{x}) + (\mathbf{x}-\mathbf{z})' \mathbf{B}(\mathbf{x}-\mathbf{z}) \\
= {} & (\mathbf{x} - \mathbf{c})'(\mathbf{A}+\mathbf{B})(\mathbf{x} - \mathbf{c}) + (\mathbf{y} - \mathbf{z})'(\mathbf{A}^{-1} + \mathbf{B}^{-1})^{-1}(\mathbf{y} - \mathbf{z})
\end{align} }[/math]



where

[math]\displaystyle{ \mathbf{c} = (\mathbf{A} + \mathbf{B})^{-1}(\mathbf{A}\mathbf{y} + \mathbf{B} \mathbf{z}) }[/math]

Note that the form xA x is called a quadratic form and is a scalar:

[math]\displaystyle{ \mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i,j}a_{ij} x_i x_j }[/math]


In other words, it sums up all possible combinations of products of pairs of elements from x, with a separate coefficient for each. In addition, since [math]\displaystyle{ x_i x_j = x_j x_i }[/math], only the sum [math]\displaystyle{ a_{ij} + a_{ji} }[/math] matters for any off-diagonal elements of A, and there is no loss of generality in assuming that A is symmetric. Furthermore, if A is symmetric, then the form [math]\displaystyle{ \mathbf{x}'\mathbf{A}\mathbf{y} = \mathbf{y}'\mathbf{A}\mathbf{x}. }[/math]



Sum of differences from the mean

Another useful formula is as follows:



[math]\displaystyle{ \sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2 }[/math]

where [math]\displaystyle{ \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i. }[/math]


With known variance


For a set of i.i.d. normally distributed data points X of size n where each individual point x follows [math]\displaystyle{ x \sim \mathcal{N}(\mu, \sigma^2) }[/math] with known variance σ2, the conjugate prior distribution is also normally distributed.



This can be shown more easily by rewriting the variance as the precision, i.e. using τ = 1/σ2. Then if [math]\displaystyle{ x \sim \mathcal{N}(\mu, 1/\tau) }[/math] and [math]\displaystyle{ \mu \sim \mathcal{N}(\mu_0, 1/\tau_0), }[/math] we proceed as follows.



First, the likelihood function is (using the formula above for the sum of differences from the mean):

[math]\displaystyle{ \begin{align}
p(\mathbf{X}\mid\mu,\tau) &= \prod_{i=1}^n \sqrt{\frac{\tau}{2\pi}} \exp\left(-\frac{1}{2}\tau(x_i-\mu)^2\right) \\
&= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left(-\frac{1}{2}\tau \sum_{i=1}^n (x_i-\mu)^2\right) \\
&= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right].
\end{align} }[/math]


Then, we proceed as follows:

[math]\displaystyle{ \begin{align}
p(\mu\mid\mathbf{X}) &\propto p(\mathbf{X}\mid\mu) p(\mu) \\
& = \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \sqrt{\frac{\tau_0}{2\pi}} \exp\left(-\frac{1}{2}\tau_0(\mu-\mu_0)^2\right) \\
&\propto \exp\left(-\frac{1}{2}\left(\tau\left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right) + \tau_0(\mu-\mu_0)^2\right)\right) \\
&\propto \exp\left(-\frac{1}{2} \left(n\tau(\bar{x}-\mu)^2 + \tau_0(\mu-\mu_0)^2 \right)\right) \\
&= \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2 + \frac{n\tau\tau_0}{n\tau+\tau_0}(\bar{x} - \mu_0)^2\right) \\
&\propto \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2\right)
\end{align} }[/math]


In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving μ. The result is the kernel of a normal distribution, with mean [math]\displaystyle{ \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0} }[/math] and precision [math]\displaystyle{ n\tau + \tau_0 }[/math], i.e.



[math]\displaystyle{ p(\mu\mid\mathbf{X}) \sim \mathcal{N}\left(\frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}, \frac{1}{n\tau + \tau_0}\right) }[/math]


This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:




[math]\displaystyle{ \begin{align}
\tau_0' &= \tau_0 + n\tau \\
\mu_0' &= \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0} \\
\bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i
\end{align} }[/math]



That is, to combine n data points with total precision of nτ (or equivalently, total variance of σ2/n) and mean of values [math]\displaystyle{ \bar{x} }[/math], derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the more ugly formulas

[math]\displaystyle{ \begin{align}
{\sigma^2_0}' &= \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\
\mu_0' &= \frac{\frac{n\bar{x}}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\
\bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i
\end{align} }[/math]
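A minimal Python sketch of this update (assuming only NumPy; the function name and hyperparameter values are ours for illustration) confirms that the precision and variance parameterizations agree:

<syntaxhighlight lang="python">
import numpy as np

def posterior_mu_known_variance(x, sigma2, mu0, sigma2_0):
    """Conjugate update for mu with known variance, in precision form:
    tau0' = tau0 + n*tau,  mu0' = (n*tau*xbar + tau0*mu0) / (n*tau + tau0)."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    tau, tau0 = 1.0 / sigma2, 1.0 / sigma2_0
    tau_post = tau0 + n * tau                            # posterior precision
    mu_post = (n * tau * xbar + tau0 * mu0) / tau_post   # precision-weighted mean
    # The variance-form ("more ugly") equations give the same answer:
    var_post = 1.0 / (n / sigma2 + 1.0 / sigma2_0)
    assert np.isclose(var_post, 1.0 / tau_post)
    return mu_post, var_post

rng = np.random.default_rng(6)
data = rng.normal(3.0, 2.0, size=40)  # sigma^2 = 4 assumed known
print(posterior_mu_known_variance(data, sigma2=4.0, mu0=0.0, sigma2_0=100.0))
</syntaxhighlight>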



With known mean

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows [math]\displaystyle{ x \sim \mathcal{N}(\mu, \sigma^2) }[/math] with known mean μ, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ2 is as follows:

[math]\displaystyle{ p(\sigma^2\mid\nu_0,\sigma_0^2) = \frac{(\sigma_0^2\frac{\nu_0}{2})^{\nu_0/2}}{\Gamma\left(\frac{\nu_0}{2} \right)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \propto \frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} }[/math]

The likelihood function from above, written in terms of the variance, is:

[math]\displaystyle{ \begin{align}
p(\mathbf{X}\mid\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i-\mu)^2\right] \\
&= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{S}{2\sigma^2}\right]
\end{align} }[/math]


where

[math]\displaystyle{ S = \sum_{i=1}^n (x_i-\mu)^2. }[/math]

Then:

[math]\displaystyle{ \begin{align}
p(\sigma^2\mid\mathbf{X}) &\propto p(\mathbf{X}\mid\sigma^2) p(\sigma^2) \\
&= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{S}{2\sigma^2}\right] \frac{(\sigma_0^2\frac{\nu_0}{2})^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2} \right)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \\
&\propto \left(\frac{1}{\sigma^2}\right)^{n/2} \frac{1}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \exp\left[-\frac{S}{2\sigma^2} + \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right] \\
&= \frac{1}{(\sigma^2)^{1+\frac{\nu_0+n}{2}}} \exp\left[-\frac{\nu_0 \sigma_0^2 + S}{2\sigma^2}\right]
\end{align} }[/math]

The above is also a scaled inverse chi-squared distribution where

[math]\displaystyle{ \begin{align}
\nu_0' &= \nu_0 + n \\
\nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2
\end{align} }[/math]

or equivalently

[math]\displaystyle{ \begin{align}
\nu_0' &= \nu_0 + n \\
{\sigma_0^2}' &= \frac{\nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2}{\nu_0+n}
\end{align} }[/math]

Reparameterizing in terms of an inverse gamma distribution, the result is:

[math]\displaystyle{ \begin{align}
\alpha' &= \alpha + \frac{n}{2} \\
\beta' &= \beta + \frac{\sum_{i=1}^n (x_i-\mu)^2}{2}
\end{align} }[/math]
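This update is equally simple to carry out numerically. A minimal Python sketch (assuming only NumPy; the function name and prior values are ours for illustration) applies the scaled-inverse-chi-squared form and reports the equivalent inverse-gamma parameters:

<syntaxhighlight lang="python">
import numpy as np

def posterior_var_known_mean(x, mu, nu0, sigma2_0):
    """Scaled-inverse-chi-squared update for the variance with known mean:
    nu0' = nu0 + n,  nu0' * sigma0^2' = nu0 * sigma0^2 + sum (x_i - mu)^2."""
    x = np.asarray(x, dtype=float)
    n = x.size
    ss = np.sum((x - mu) ** 2)
    nu_post = nu0 + n
    sigma2_post = (nu0 * sigma2_0 + ss) / nu_post
    # Equivalent inverse-gamma parameters: alpha' = alpha + n/2, beta' = beta + ss/2.
    alpha_post = nu_post / 2.0
    beta_post = nu_post * sigma2_post / 2.0
    return (nu_post, sigma2_post), (alpha_post, beta_post)

rng = np.random.default_rng(7)
data = rng.normal(1.0, 3.0, size=30)  # mu = 1 assumed known
print(posterior_var_known_mean(data, mu=1.0, nu0=1.0, sigma2_0=1.0))
</syntaxhighlight>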

With unknown mean and unknown variance

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows [math]\displaystyle{ x \sim \mathcal{N}(\mu, \sigma^2) }[/math] with unknown mean μ and unknown variance σ2, a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution.

Logically, this originates as follows:

  1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve sufficient statistics computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
  2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and sum of squared deviations.
  3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
  4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: Shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: As the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
  5. This suggests that we create a conditional prior of the mean on the unknown variance, with a hyperparameter specifying the mean of the pseudo-observations associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Note that each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately.
  6. This leads immediately to the normal-inverse-gamma distribution, which is the product of the two distributions just defined, with conjugate priors used (an inverse gamma distribution over the variance, and a normal distribution over the mean, conditional on the variance) and with the same four parameters just defined.

The priors are normally defined as follows:

[math]\displaystyle{ \begin{align}
p(\mu\mid\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) \\
p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2)
\end{align} }[/math]

The update equations can be derived, and look as follows:

[math]\displaystyle{ \begin{align}
\bar{x} &= \frac 1 n \sum_{i=1}^n x_i \\
\mu_0' &= \frac{n_0\mu_0 + n\bar{x}}{n_0 + n} \\
n_0' &= n_0 + n \\
\nu_0' &= \nu_0 + n \\
\nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\bar{x})^2 + \frac{n_0 n}{n_0 + n}(\mu_0 - \bar{x})^2
\end{align} }[/math]

The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for [math]\displaystyle{ \nu_0'{\sigma_0^2}' }[/math] is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.

In other words, the posterior distribution has the form of a product of a normal distribution over p(μ | σ2) times an inverse gamma distribution over p(σ2), with parameters that are the same as the update equations above.

换句话说,后验概率的形式是 p (μ | σ < sup > 2 )上的正态分布乘以 p 上的逆伽玛分布(σ < sup > 2 )的乘积,其参数与上面的更新方程相同。


The priors are normally defined as follows:


[math]\displaystyle{ \begin{align} The occurrence of normal distribution in practical problems can be loosely classified into four categories: 在实际问题中,正态分布的出现大致可分为四类: p(\mu\mid\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) \\ Exactly normal distributions; 正态分布; p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) Approximately normal laws, for example when such approximation is justified by the central limit theorem; and 近似正规定律,例如当这种近似被中心极限定理证明是正确的; 和 \end{align} }[/math]
Distributions modeled as normal – the normal distribution being the distribution with maximum entropy for a given mean and variance.

分布建模为正态分布-正态分布是具有最大熵的分布给定的均值和方差。




The update equations can be derived, and look as follows:

[math]\displaystyle{ \begin{align}
\bar{x} &= \frac 1 n \sum_{i=1}^n x_i \\
\mu_0' &= \frac{n_0\mu_0 + n\bar{x}}{n_0 + n} \\
n_0' &= n_0 + n \\
\nu_0' &= \nu_0 + n \\
\nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\bar{x})^2 + \frac{n_0 n}{n_0 + n}(\mu_0 - \bar{x})^2
\end{align} }[/math]


The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for [math]\displaystyle{ \nu_0'{\sigma_0^2}' }[/math] is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.
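As a concrete illustration, here is a minimal Python sketch of these update equations (the function and variable names are our own, chosen for clarity; only NumPy is assumed):

<syntaxhighlight lang="python">
import numpy as np

def nig_posterior(x, mu0, n0, nu0, sigma0_sq):
    """Update the normal-inverse-gamma hyperparameters after observing
    the data vector x, following the update equations above."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x_bar = x.mean()
    S = np.sum((x - x_bar) ** 2)  # sum of squared deviations about the data mean

    mu0_new = (n0 * mu0 + n * x_bar) / (n0 + n)  # weighted average of the two means
    n0_new = n0 + n                              # pseudo-observations for the mean
    nu0_new = nu0 + n                            # pseudo-observations for the variance
    # nu0' * sigma0^2' = prior SS + data SS + "interaction term" for the
    # deviation between the prior mean and the data mean
    nu_sigma_new = nu0 * sigma0_sq + S + (n0 * n) / (n0 + n) * (mu0 - x_bar) ** 2
    return mu0_new, n0_new, nu0_new, nu_sigma_new / nu0_new

# Example: a weak prior updated with 100 simulated points from N(2, 1.5^2)
rng = np.random.default_rng(0)
print(nig_posterior(rng.normal(2.0, 1.5, 100), mu0=0.0, n0=1.0, nu0=1.0, sigma0_sq=1.0))
</syntaxhighlight>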


The derivation of the posterior parameters is as follows.

The prior distributions are



[math]\displaystyle{ \begin{align} p(\mu\mid\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) = \frac{1}{\sqrt{2\pi\frac{\sigma^2}{n_0}}} \exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ &\propto (\sigma^2)^{-1/2} \exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) \\ &= \frac{(\sigma_0^2\nu_0/2)^{\nu_0/2}}{\Gamma(\nu_0/2)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\nu_0/2}} \\ &\propto {(\sigma^2)^{-(1+\nu_0/2)}} \exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]. \end{align} }[/math]



Therefore, the joint prior is


[math]\displaystyle{ \begin{align} p(\mu,\sigma^2; \mu_0, n_0, \nu_0,\sigma_0^2) &= p(\mu\mid\sigma^2; \mu_0, n_0)\,p(\sigma^2; \nu_0,\sigma_0^2) \\ &\propto (\sigma^2)^{-(\nu_0+3)/2} \exp\left[-\frac 1 {2\sigma^2}\left(\nu_0\sigma_0^2 + n_0(\mu-\mu_0)^2\right)\right]. \end{align} }[/math]


The likelihood function from the section above with known variance is:



[math]\displaystyle{ p(\mathbf{X}\mid\mu,\sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i -\mu)^2\right)\right] }[/math]


Writing it in terms of variance rather than precision, we get:

[math]\displaystyle{ \begin{align} p(\mathbf{X}\mid\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \\ &\propto {\sigma^2}^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x} -\mu)^2\right)\right] \end{align} }[/math]



where [math]\displaystyle{ S = \sum_{i=1}^n(x_i-\bar{x})^2. }[/math]



Therefore, the posterior is (dropping the hyperparameters as conditioning factors):



[math]\displaystyle{ \begin{align}
p(\mu,\sigma^2\mid\mathbf{X}) & \propto p(\mu,\sigma^2) \, p(\mathbf{X}\mid\mu,\sigma^2) \\
& \propto (\sigma^2)^{-(\nu_0+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + n_0(\mu-\mu_0)^2\right)\right] {\sigma^2}^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x} -\mu)^2\right)\right] \\
&= (\sigma^2)^{-(\nu_0+n+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + n_0(\mu-\mu_0)^2 + n(\bar{x} -\mu)^2\right)\right] \\
&= (\sigma^2)^{-(\nu_0+n+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2 + (n_0+n)\left(\mu-\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}\right)^2\right)\right] \\
& \propto (\sigma^2)^{-1/2} \exp\left[-\frac{n_0+n}{2\sigma^2}\left(\mu-\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}\right)^2\right] \\
& \quad\times (\sigma^2)^{-(\nu_0/2+n/2+1)} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right] \\
& = \mathcal{N}_{\mu\mid\sigma^2}\left(\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}, \frac{\sigma^2}{n_0+n}\right) \cdot {\rm IG}_{\sigma^2}\left(\frac12(\nu_0+n), \frac12\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right).
\end{align} }[/math]


In other words, the posterior distribution has the form of a product of a normal distribution over p(μ | σ2) times an inverse gamma distribution over p(σ2), with parameters that are the same as the update equations above.




== Occurrence and applications ==


The occurrence of normal distribution in practical problems can be loosely classified into four categories:

  1. Exactly normal distributions;
  2. Approximately normal laws, for example when such approximation is justified by the central limit theorem;
  3. Distributions modeled as normal – the normal distribution being the distribution with maximum entropy for a given mean and variance;
  4. Regression problems – the normal distribution being found after systematic effects have been modeled sufficiently well.
[math]\displaystyle{ 

《数学》



    \Phi(x) = 1 - \varphi(x)\left(b_1t + b_2t^2 + b_3t^3 + b_4t^4 + b_5t^5\right) + \varepsilon(x), \qquad t = \frac{1}{1+b_0x},

Phi (x) = 1-varphi (x)左(b _ 1t + b _ 2t ^ 2 + b _ 3t ^ 3 + b _ 4t ^ 4 + b _ 5t ^ 5右) + varepsilon (x) ,qt = frac {1}{1 + b _ 0x } ,

=== Exact normality ===


Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell. Examples of such quantities are:


  • The ground state of a quantum harmonic oscillator has the Gaussian distribution.
  • The position of a particle that experiences diffusion. If initially the particle is located at a specific point (that is, its probability distribution is the Dirac delta function), then after time t its location is described by a normal distribution with variance t, which satisfies the diffusion equation [math]\displaystyle{ \frac{\partial}{\partial t} f(x,t) = \frac{1}{2} \frac{\partial^2}{\partial x^2} f(x,t) }[/math]. If the initial location is given by a certain density function [math]\displaystyle{ g(x) }[/math], then the density at time t is the convolution of g and the normal PDF.



=== Approximate normality ===

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.


  • In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where infinitely divisible and decomposable distributions are involved, such as

    • [[Poisson distribution|Poisson random variables]], associated with rare events;


  • [[Thermal radiation]] has a [[Bose–Einstein statistics|Bose–Einstein]] distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.



=== Assumed normality ===


文件:Fisher iris versicolor sepalwidth.svg
Histogram of sepal widths for Iris versicolor from Fisher's Iris flower data set, with superimposed best-fitting normal distribution.



There are statistical methods to empirically test that assumption, see the above Normality tests section.


  • In biology, the logarithm of various variables tends to have a normal distribution, that is, the variables tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including:
    • Measures of size of living tissue (length, height, skin area, weight);[47]


    • The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
    • Certain physiological measurements, such as blood pressure of adult humans.

  • Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.[48]


  • In standardized testing, results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the IQ test) or transforming the raw test scores into "output" scores by fitting them to the normal distribution. For example, the SAT's traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.
文件:FitNormDistr.tif
Fitted cumulative normal distribution to October rainfalls, see distribution fitting


=== Produced normality ===


In [[regression analysis]], lack of normality in [[Errors and residuals in statistics|residuals]] simply indicates that the model postulated is inadequate in accounting for the tendency in the data and needs to be augmented; in other words, normality in residuals can always be achieved given a properly constructed model.{{Citation needed|date=May 2020|reason=This is a crucial claim about fundamental regression analysis.}}


== Computational methods ==


=== Generating values from normal distribution ===


[[File:Planche de Galton.jpg|thumb|250px|right|The [[bean machine]], a device invented by [[Francis Galton]], can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.]]


In computer simulations, especially in applications of the [[Monte-Carlo method]], it is often desirable to generate values that are normally distributed. The algorithms listed below all generate the standard normal deviates, since a {{nowrap|''N''(''μ, σ''{{su|p=2}})}} can be generated as {{nowrap|''X {{=}} μ + σZ''}}, where ''Z'' is standard normal. All these algorithms rely on the availability of a [[random number generator]] ''U'' capable of producing [[Uniform distribution (continuous)|uniform]] random variates.


* The most straightforward method is based on the [[probability integral transform]] property: if ''U'' is distributed uniformly on (0,1), then Φ<sup>−1</sup>(''U'') will have the standard normal distribution. The drawback of this method is that it relies on calculation of the [[probit function]] Φ<sup>−1</sup>, which cannot be done analytically. Some approximate methods are described in {{harvtxt |Hart |1968 }} and in the [[error function|erf]] article. Wichura gives a fast algorithm for computing this function to 16 decimal places,<ref>{{cite journal|last=Wichura|first=Michael J.|year=1988|title=Algorithm AS241: The Percentage Points of the Normal Distribution|journal=Applied Statistics|volume=37|pages=477–84|doi=10.2307/2347330|jstor=2347330|issue=3}}</ref> which is used by [[R programming language|R]] to compute random variates of the normal distribution.


* An easy to program approximate approach, which relies on the [[central limit theorem]], is as follows: generate 12 uniform ''U''(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be [[Irwin–Hall distribution|Irwin–Hall]], which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).<ref>{{harvtxt |Johnson |Kotz |Balakrishnan |1995 |loc=Equation (26.48) }}</ref>


* The [[Box–Muller transform|Box–Muller method]] uses two independent random numbers ''U'' and ''V'' distributed [[uniform distribution (continuous)|uniformly]] on (0,1). Then the two random variables ''X'' and ''Y''

:: [math]\displaystyle{ X = \sqrt{- 2 \ln U} \, \cos(2 \pi V) , \qquad Y = \sqrt{- 2 \ln U} \, \sin(2 \pi V) }[/math]
will both have the standard normal distribution, and will be independent. This formulation arises because for a bivariate normal random vector (X, Y) the squared norm X<sup>2</sup> + Y<sup>2</sup> will have the chi-squared distribution with two degrees of freedom, which is an easily generated exponential random variable corresponding to the quantity −2 ln(U) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable V. (A short code sketch of this and related generators follows this list.)


  • The Marsaglia polar method is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, U and V are drawn from the uniform (−1,1) distribution, and then S = U<sup>2</sup> + V<sup>2</sup> is computed. If S is greater than or equal to 1, then the method starts over; otherwise the two quantities
[math]\displaystyle{ X = U\sqrt{\frac{-2\ln S}{S}}, \qquad Y = V\sqrt{\frac{-2\ln S}{S}} }[/math]


are returned. Again, X and Y are independent, standard normal random variables.
  • The Ratio method[50] is a rejection method. The algorithm proceeds as follows:
    • Generate two independent uniform deviates U and V;
    • Compute X = √(8/e) (V − 0.5)/U;
    • Optional: if X<sup>2</sup> ≤ 5 − 4e<sup>1/4</sup>U then accept X and terminate the algorithm;
    • Optional: if X<sup>2</sup> ≥ 4e<sup>−1.35</sup>/U + 1.4 then reject X and start over from step 1;
    • If X<sup>2</sup> ≤ −4 ln U then accept X; otherwise start the algorithm over.


The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved[51] so that the logarithm is rarely evaluated.
  • The ziggurat algorithm[52] is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.


  • Integer arithmetic can be used to sample from the standard normal distribution.[53] This method is exact in the sense that it satisfies the conditions of ideal approximation;[54] i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number.
  • There is also some investigation[55] into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into a normally distributed data.
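As a concrete illustration of three of the generators described above (the central-limit sum of 12 uniforms, the Box–Muller transform, and the Marsaglia polar method), here is a minimal Python sketch using only the standard library; the function names are our own:

<syntaxhighlight lang="python">
import math
import random

def clt_approx():
    """Approximate N(0,1) deviate: the sum of 12 U(0,1) deviates minus 6
    (an Irwin-Hall random variable, limited to the range (-6, 6))."""
    return sum(random.random() for _ in range(12)) - 6.0

def box_muller():
    """Exact pair of independent N(0,1) deviates from two uniforms."""
    u = 1.0 - random.random()                # in (0, 1], so log(u) is finite
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))        # radius from a chi-squared(2) deviate
    return r * math.cos(2 * math.pi * v), r * math.sin(2 * math.pi * v)

def marsaglia_polar():
    """Box-Muller variant that avoids sin/cos by rejection sampling."""
    while True:
        u = 2.0 * random.random() - 1.0      # uniform on (-1, 1)
        v = 2.0 * random.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:                    # keep only points inside the unit circle
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

# A N(mu, sigma^2) deviate is mu + sigma*Z for a standard normal Z:
z, _ = marsaglia_polar()
print(5.0 + 2.0 * z)
</syntaxhighlight>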

=== Numerical approximations for the normal CDF ===


The standard normal CDF is widely used in scientific and statistical computing.


where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares (NWLS) method.

其中 h 代表”观测精度的度量”。利用这一正态分布规律作为实验误差的一般模型,高斯建立了非线性加权最小二乘法(NWLS)。

The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.



{{unordered list



|1= {{harvtxt |Zelen |Severo |1964 }} give the approximation for Φ(x) for x > 0 with the absolute error |ε(x)| < 7.5·10−8 (algorithm 26.2.17; a short code sketch follows this list):

[math]\displaystyle{ \Phi(x) = 1 - \varphi(x)\left(b_1t + b_2t^2 + b_3t^3 + b_4t^4 + b_5t^5\right) + \varepsilon(x), \qquad t = \frac{1}{1+b_0x}, }[/math]


where ϕ(x) is the standard normal PDF, and b0 = 0.2316419, b1 = 0.319381530, b2 = −0.356563782, b3 = 1.781477937, b4 = −1.821255978, b5 = 1.330274429.


|2= {{harvtxt |Hart |1968 }} lists some dozens of approximations – by means of rational functions, with or without exponentials – for the {{mono|erfc()}} function. His algorithms vary in the degree of complexity and the resulting precision, with maximum absolute precision of 24 digits. An algorithm by {{harvtxt |West |2009 }} combines Hart's algorithm 5666 with a [[continued fraction]] approximation in the tail to provide a fast computation algorithm with a 16-digit precision.



|3= {{harvtxt |Cody |1969 }}, after recalling that the Hart68 solution is not suited for erf, gives a solution for both erf and erfc, with maximal relative error bound, via Rational Chebyshev Approximation.



|4= {{harvtxt |Marsaglia |2004 }} suggested a simple algorithm based on the Taylor series expansion


[math]\displaystyle{ \Phi(x) = \frac12 + \varphi(x)\left( x + \frac{x^3} 3 + \frac{x^5}{3\cdot5} + \frac{x^7}{3\cdot5\cdot7} + \frac{x^9}{3\cdot5\cdot7\cdot9} + \cdots \right) }[/math]


The term "standard normal", which denotes the normal distribution with zero mean and unit variance came into general use around the 1950s, appearing in the popular textbooks by P.G. Hoel (1947) "Introduction to mathematical statistics" and A.M. Mood (1950) "Introduction to the theory of statistics".

“标准正态”一词是20世纪50年代前后在普通教科书中出现的一个概念,它表示的是零均值和单位方差的正态分布。Hoel (1947)《数理统计学导论》和《 a.m。穆德(1950)《统计学理论导论》。

for calculating Φ(x) with arbitrary precision. The drawback of this algorithm is comparatively slow calculation time (for example it takes over 300 iterations to calculate the function with 16 digits of precision when x = 10).


|5= The GNU Scientific Library calculates values of the standard normal CDF using Hart's algorithms and approximations with Chebyshev polynomials.

}}
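As an illustration of the first approximation in the list above (algorithm 26.2.17), here is a minimal Python sketch; the coefficient values are those quoted above, and the function name is our own:

<syntaxhighlight lang="python">
import math

B = (0.2316419, 0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)

def phi_approx(x):
    """Approximate the standard normal CDF for x >= 0 with absolute error
    below 7.5e-8; for x < 0 use the symmetry Phi(x) = 1 - Phi(-x)."""
    if x < 0:
        return 1.0 - phi_approx(-x)
    t = 1.0 / (1.0 + B[0] * x)
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    # Horner evaluation of b1*t + b2*t^2 + b3*t^3 + b4*t^4 + b5*t^5
    poly = t * (B[1] + t * (B[2] + t * (B[3] + t * (B[4] + t * B[5]))))
    return 1.0 - pdf * poly

print(phi_approx(1.96))  # approximately 0.9750
</syntaxhighlight>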


Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting p=Φ(z), the simplest approximation for the quantile function is:


[math]\displaystyle{ z=\Phi^{-1}(p)=5.5556\left[1- \left( \frac{1-p} p \right)^{0.1186}\right],\qquad p\ge 1/2 }[/math]


This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≤ p ≤ 0.9999, corresponding to 0 ≤ z ≤ 3.719). For p < 1/2 replace p by 1 − p and change sign. Another approximation, somewhat less accurate, is the single-parameter approximation:


[math]\displaystyle{ z=-0.4115\left\{ \frac{1-p} p + \log \left[ \frac{1-p} p \right] - 1 \right\}, \qquad p\ge 1/2 }[/math]


The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by


[math]\displaystyle{ \begin{align} L(z) & =\int_z^\infty (u-z)\varphi(u) \, du=\int_z^\infty [1-\Phi (u)] \, du \\[5pt] L(z) & \approx \begin{cases} 0.4115\left(\dfrac p {1-p} \right) - z, & p<1/2, \\ \\ 0.4115\left( \dfrac {1-p} p \right), & p\ge 1/2. \end{cases} \\[5pt] \text{or, equivalently,} \\ L(z) & \approx \begin{cases} 0.4115\left\{ 1-\log \left[ \frac p {1-p} \right] \right\}, & p<1/2, \\ \\ 0.4115 \dfrac{1-p} p, & p\ge 1/2. \end{cases} \end{align} }[/math]


This approximation is particularly accurate for the right far-tail (maximum error of 10−3 for z≥1.4). Highly accurate approximations for the CDF, based on Response Modeling Methodology (RMM, Shore, 2011, 2012), are shown in Shore (2005).
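A minimal Python sketch of the simplest of Shore's quantile approximations above, including the reflection rule for p < 1/2, might look like this (the function name is our own):

<syntaxhighlight lang="python">
def shore_quantile(p):
    """Shore (1982) approximation to z = Phi^{-1}(p); maximum absolute
    error about 0.026 for 0.5 <= p <= 0.9999."""
    if p < 0.5:
        return -shore_quantile(1.0 - p)  # replace p by 1 - p and change sign
    return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)

print(shore_quantile(0.975))  # roughly 1.96
</syntaxhighlight>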


Some more approximations can be found at: Error function#Approximation with elementary functions. In particular, small relative error on the whole domain for the CDF [math]\displaystyle{ \Phi }[/math] and the quantile function [math]\displaystyle{ \Phi^{-1} }[/math] as well, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.


== History ==

=== Development ===

Some authors[56][57] attribute the credit for the discovery of the normal distribution to de Moivre, who in 1738 published in the second edition of his "The Doctrine of Chances" the study of the coefficients in the binomial expansion of (a + b)<sup>n</sup>. De Moivre proved that the middle term in this expansion has the approximate magnitude of [math]\displaystyle{ 2/\sqrt{2\pi n} }[/math], and that "If m or ½n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is [math]\displaystyle{ -\frac{2\ell\ell}{n} }[/math]."[58] Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[59]


文件:Carl Friedrich Gauss.jpg
Carl Friedrich Gauss discovered the normal distribution in 1809 as a way to rationalize the method of least squares.


In 1809 Gauss published his monograph "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, M′, M′′, ... to denote the measurements of some unknown quantity V, and sought the "most probable" estimator of that quantity: the one that maximizes the probability of obtaining the observed experimental results. In his notation φΔ is the probability law of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values. Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:[60]


[math]\displaystyle{ \varphi\mathit{\Delta} = \frac h {\surd\pi} \, e^{-\mathrm{hh}\Delta\Delta}, }[/math]
where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares (NWLS) method.[61]

文件:Pierre-Simon Laplace.jpg
Pierre-Simon Laplace proved the central limit theorem in 1810, consolidating the importance of the normal distribution in statistics.


Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions. It was Laplace who first posed the problem of aggregating several observations in 1774,[62] although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the [[Gaussian integral|integral]] [math]\displaystyle{ \int e^{-t^2} \, dt = \sqrt{\pi} }[/math] in 1782, providing the normalization constant for the normal distribution.[63] Finally, it was Laplace who in 1810 proved and presented to the Academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.[64]


It is of interest to note that in 1809 the Irish-American mathematician Robert Adrain published two derivations of the normal probability law, simultaneously and independently from Gauss.[65] His works remained largely unnoticed by the scientific community, until in 1871 they were "rediscovered" by Abbe.[66]


In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[67] "The number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is

[math]\displaystyle{ \operatorname{N} \frac{1}{\alpha\;\sqrt\pi}\; e^{-\frac{x^2}{\alpha^2}} \, dx }[/math]


=== Naming ===

Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace's second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual".[68] However, by the end of the 19th century some authors had started using the name normal distribution, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus "normal". Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances."[69] Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.[70]



Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ as in modern notation. Soon after this, in year 1915, Fisher added the location parameter to the formula for normal distribution, expressing it in the way it is written nowadays:

[math]\displaystyle{ df = \frac{1}{\sqrt{2\sigma^2\pi}}e^{-(x-m)^2/(2\sigma^2)} \, dx }[/math]


The term "standard normal", which denotes the normal distribution with zero mean and unit variance came into general use around the 1950s, appearing in the popular textbooks by P.G. Hoel (1947) "Introduction to mathematical statistics" and A.M. Mood (1950) "Introduction to the theory of statistics".[71]


== See also ==

  • Bates distribution — similar to the Irwin–Hall distribution, but rescaled back into the 0 to 1 range
  • Behrens–Fisher problem — the long-standing problem of testing whether two normal samples with different variances have same means;
  • Z-test — using the normal distribution




== References ==

=== Citations ===

  1. 1.0 1.1 1.2 1.3 1.4 1.5 "List of Probability and Statistics Symbols". Math Vault (in English). April 26, 2020. Retrieved August 15, 2020.
  2. Weisstein, Eric W. "Normal Distribution". mathworld.wolfram.com (in English). Retrieved August 15, 2020.
  3. Normal Distribution, Gale Encyclopedia of Psychology
  4. Script error: no "Footnotes" module.
  5. Lyon, A. (2014). Why are Normal Distributions Normal?, The British Journal for the Philosophy of Science.
  6. 6.0 6.1 "Normal Distribution". www.mathsisfun.com. Retrieved August 15, 2020.
  7. Script error: no "Footnotes" module.
  8. Script error: no "Footnotes" module.
  9. Script error: no "Footnotes" module.
  10. Script error: no "Footnotes" module.
  11. Scott, Clayton; Nowak, Robert (August 7, 2003). "The Q-function". Connexions.
  12. Barak, Ohad (April 6, 2006). "Q Function and Error Function" (PDF). Tel Aviv University. Archived from the original (PDF) on March 25, 2009.
  13. Weisstein, Eric W. "Normal Distribution Function". MathWorld.
  14. Template:AS ref
  15. "Wolfram|Alpha: Computational Knowledge Engine". Wolframalpha.com. Retrieved March 3, 2017.
  16. "Wolfram|Alpha: Computational Knowledge Engine". Wolframalpha.com.
  17. "Wolfram|Alpha: Computational Knowledge Engine". Wolframalpha.com. Retrieved March 3, 2017.
  18. 18.0 18.1 18.2 Script error: no "Footnotes" module.
  19. Script error: no "Footnotes" module.
  20. Script error: no "Footnotes" module.
  21. Papoulis, Athanasios. Probability, Random Variables and Stochastic Processes (4th ed.). p. 148. 
  22. Script error: no "Footnotes" module.
  23. Script error: no "Footnotes" module.
  24. Script error: no "Footnotes" module.
  25. Williams, David (2001). Weighing the odds : a course in probability and statistics (Reprinted. ed.). Cambridge [u.a.]: Cambridge Univ. Press. pp. 197–199. ISBN 978-0-521-00618-7. https://archive.org/details/weighingoddscour00will. 
  26. Smith, José M. Bernardo; Adrian F. M. (2000). Bayesian theory (Reprint ed.). Chichester [u.a.]: Wiley. pp. 209, 366. ISBN 978-0-471-49464-5. https://archive.org/details/bayesiantheory00bern_963. 
  27. O'Hagan, A. (1994) Kendall's Advanced Theory of statistics, Vol 2B, Bayesian Inference, Edward Arnold. (Section 5.40)
  28. Script error: no "Footnotes" module.
  29. Script error: no "Footnotes" module.
  30. Script error: no "Footnotes" module.
  31. 31.0 31.1 Script error: no "Footnotes" module.
  32. 32.0 32.1 Script error: no "Footnotes" module.
  33. Quine, M.P. (1993). "On three characterisations of the normal distribution". Probability and Mathematical Statistics. 14 (2): 257–263.
  34. UIUC, Lecture 21. The Multivariate Normal Distribution, 21.6:"Individually Gaussian Versus Jointly Gaussian".
  35. Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", The American Statistician, volume 36, number 4 November 1982, pages 372–373
  36. "Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions". Allisons.org. December 5, 2007. Retrieved March 3, 2017.
  37. Jordan, Michael I. (February 8, 2010). "Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution" (PDF).
  38. Script error: no "Footnotes" module.
  39. "Normal Approximation to Poisson Distribution". Stat.ucla.edu. Retrieved March 3, 2017.
  40. Weisstein, Eric W. "Normal Product Distribution". MathWorld. wolfram.com.
  41. Lukacs, Eugene (1942). "A Characterization of the Normal Distribution". The Annals of Mathematical Statistics. 13 (1): 91–3. doi:10.1214/aoms/1177731647. ISSN 0003-4851. JSTOR 2236166.
  42. Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". Sankhyā. 13 (4): 359–62. ISSN 0036-4452. JSTOR 25048183.
  43. Lehmann, E. L. (1997). Testing Statistical Hypotheses (2nd ed.). Springer. p. 199. ISBN 978-0-387-94919-2. 
  44. 44.0 44.1 Script error: no "Footnotes" module.
  45. Script error: no "Footnotes" module.
  46. Script error: no "Footnotes" module.
  47. Script error: no "Footnotes" module.
  48. Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. pp. 592–593. ISBN 9780521592710. https://books.google.com/books?id=tTN4HuUNXjgC&pg=PA592. 
  49. Oosterbaan, Roland J. (1994). "Chapter 6: Frequency and Regression Analysis of Hydrologic Data". In Ritzema, Henk P.. Drainage Principles and Applications, Publication 16 (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. ISBN 978-90-70754-33-4. http://www.waterlog.info/pdf/freqtxt.pdf. 
  50. Script error: no "Footnotes" module.
  51. Script error: no "Footnotes" module.
  52. Script error: no "Footnotes" module.
  53. Script error: no "Footnotes" module.
  54. Script error: no "Footnotes" module.
  55. Script error: no "Footnotes" module.
  56. Script error: no "Footnotes" module.
  57. Script error: no "Footnotes" module.
  58. De Moivre, Abraham (1733), Corollary I – see Script error: no "Footnotes" module.
  59. Script error: no "Footnotes" module.
  60. Script error: no "Footnotes" module.
  61. Script error: no "Footnotes" module.
  62. Script error: no "Footnotes" module.
  63. Script error: no "Footnotes" module.
  64. Script error: no "Footnotes" module.
  65. Script error: no "Footnotes" module.
  66. Script error: no "Footnotes" module.
  67. Script error: no "Footnotes" module.
  68. Jaynes, Edwin J.; Probability Theory: The Logic of Science, Ch 7
  69. Peirce, Charles S. (c. 1909 MS), Collected Papers v. 6, paragraph 327
  70. Script error: no "Footnotes" module.
  71. "Earliest uses... (entry STANDARD NORMAL CURVE)".


== Sources ==

  • Amari, Shun-ichi; Nagaoka, Hiroshi (2000). Methods of Information Geometry. Oxford University Press. ISBN 978-0-8218-0531-2.
  • Bryc, Wlodzimierz (1995). The Normal Distribution: Characterizations with Applications. Springer-Verlag. ISBN 978-0-387-97990-8.
  • Casella, George; Berger, Roger L. (2001). Statistical Inference (2nd ed.). Duxbury. ISBN 978-0-534-24312-8.

Category:Continuous distributions
Category:Conjugate prior distributions
Category:Exponential family distributions
Category:Stable distributions
Category:Location-scale family probability distributions


This page was moved from wikipedia:en:Normal distribution. Its edit history can be viewed at 正态分布/edithistory