A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy, thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole.<ref name="montecarloanalysis" />

== Use in mathematics ==

In general, Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing the fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.

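A minimal sketch of this idea in Python (illustrative names, not from the original article): estimate <math>\pi</math> by generating uniform random points in the unit square and observing the fraction that obeys the property of lying inside the quarter circle.

<syntaxhighlight lang="python">
import random

def estimate_pi(n_samples):
    """Estimate pi as 4 times the fraction of random points that fall
    inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # the point obeys the property of interest
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))  # typically close to 3.1416
</syntaxhighlight>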
    
=== Integration ===

[[File:Monte-carlo2.gif|thumb|Monte Carlo integration works by comparing random points with the value of the function|link=Special:FilePath/Monte-carlo2.gif]]

[[File:Monte-Carlo method (errors).png|thumb|Errors reduce by a factor of <math>\scriptstyle 1/\sqrt{N}</math>|link=Special:FilePath/Monte-Carlo_method_(errors).png]]

Deterministic [[numerical integration]] algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then [[googol|10<sup>100</sup>]] points are needed for 100 dimensions—far too many to be computed. This is called the [[curse of dimensionality]]. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an [[iterated integral]].<ref name="Press">Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (1996) [1986]. ''Numerical Recipes in Fortran 77: The Art of Scientific Computing''. Fortran Numerical Recipes. '''1''' (2nd ed.). Cambridge University Press. ISBN 978-0-521-43064-7.</ref> 100 [[dimension]]s is by no means unusual, since in many physical problems, a "dimension" is equivalent to a [[degrees of freedom (physics and chemistry)|degree of freedom]].
    
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably [[well-behaved]], it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the [[central limit theorem]], this method displays <math>\scriptstyle 1/\sqrt{N}</math> convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.<ref name="Press" />

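The <math>\scriptstyle 1/\sqrt{N}</math> behaviour can be checked empirically. A minimal Python sketch, using an illustrative 100-dimensional integrand whose exact mean value is 50: the spread of independent estimates roughly halves each time the number of sampled points is quadrupled.

<syntaxhighlight lang="python">
import random
import statistics

def f(point):
    # A well-behaved test integrand on the 100-dimensional unit cube;
    # its exact mean value is 50.
    return sum(point)

def mc_estimate(n_samples, dim=100):
    total = 0.0
    for _ in range(n_samples):
        total += f([random.random() for _ in range(dim)])
    return total / n_samples

def empirical_error(n_samples, repeats=50):
    # Spread of independent estimates; since the estimator is unbiased,
    # this tracks the typical error against the true value 50.
    return statistics.stdev([mc_estimate(n_samples) for _ in range(repeats)])

for n in (100, 400, 1600):  # quadruple N at each step
    print(n, empirical_error(n))  # the spread roughly halves each time
</syntaxhighlight>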
    
A refinement of this method, known as [[importance sampling]] in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as [[stratified sampling]], [[Monte Carlo integration#Recursive stratified sampling|recursive stratified sampling]], adaptive umbrella sampling<ref name=":59">Mezei, M. (31 December 1986). "Adaptive umbrella sampling: Self-consistent determination of the non-Boltzmann bias". ''Journal of Computational Physics''. '''68''' (1): 237–248. Bibcode:1987JCoPh..68..237M. doi:10.1016/0021-9991(87)90054-4.</ref><ref name=":60">Bartels, Christian; Karplus, Martin (31 December 1997). "Probability Distributions for Complex Systems: Adaptive Umbrella Sampling of the Potential Energy". ''The Journal of Physical Chemistry B''. '''102''' (5): 865–880. doi:10.1021/jp972280j.</ref> or the [[VEGAS algorithm]].

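A minimal Python sketch of importance sampling, using an illustrative integrand <math>e^{-10x}</math> on <math>[0,1]</math> and a truncated-exponential proposal density. Because the proposal here is exactly proportional to the integrand (which, as noted above, presupposes knowing the integral), the reweighted estimator has essentially zero variance, while the plain estimator fluctuates.

<syntaxhighlight lang="python">
import math
import random

LAM = 10.0
NORM = 1.0 - math.exp(-LAM)  # normalizing constant of the proposal density

def f(x):
    return math.exp(-LAM * x)  # integrand, large near x = 0

def g(x):
    return LAM * math.exp(-LAM * x) / NORM  # proposal density on [0, 1]

def sample_g():
    # Inverse-CDF sampling from g (an exponential truncated to [0, 1]).
    u = random.random()
    return -math.log(1.0 - u * NORM) / LAM

def plain_mc(n):
    return sum(f(random.random()) for _ in range(n)) / n

def importance_mc(n):
    # Sample where the integrand is large, then reweight by f/g.
    return sum(f(x) / g(x) for x in (sample_g() for _ in range(n))) / n

exact = NORM / LAM  # about 0.0999955
print(exact, plain_mc(10_000), importance_mc(10_000))
</syntaxhighlight>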
    
A similar approach, the [[quasi-Monte Carlo method]], uses [[low-discrepancy sequence]]s. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly.

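A minimal Python sketch of the quasi-Monte Carlo idea, using the base-2 van der Corput sequence as a simple one-dimensional low-discrepancy sequence; the integrand is illustrative.

<syntaxhighlight lang="python">
import math
import random

def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def integrate(points):
    # Estimate the integral of sin(pi x) over [0, 1]; the exact value is 2/pi.
    return sum(math.sin(math.pi * x) for x in points) / len(points)

n = 4096
quasi = integrate([van_der_corput(i) for i in range(1, n + 1)])
pseudo = integrate([random.random() for _ in range(n)])
exact = 2.0 / math.pi
print(abs(quasi - exact), abs(pseudo - exact))  # quasi error is usually far smaller
</syntaxhighlight>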
    
Another class of methods for sampling points in a volume is to simulate random walks over it ([[Markov chain Monte Carlo]]). Such methods include the [[Metropolis–Hastings algorithm]], [[Gibbs sampling]], [[Wang and Landau algorithm]], and interacting type MCMC methodologies such as the [[Particle filter|sequential Monte Carlo]] samplers.<ref name=":61">Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay (2006). "Sequential Monte Carlo samplers". ''Journal of the Royal Statistical Society, Series B''. '''68''' (3): 411–436. arXiv:cond-mat/0212648. doi:10.1111/j.1467-9868.2006.00553.x. S2CID 12074789.</ref>

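A minimal Python sketch of random-walk Metropolis (a special case of Metropolis–Hastings with a symmetric proposal), targeting a standard normal density purely for illustration.

<syntaxhighlight lang="python">
import math
import random

def log_target(x):
    # Unnormalized log-density of the target distribution;
    # a standard normal is used purely for illustration.
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis: propose x' = x + noise,
    accept with probability min(1, p(x') / p(x))."""
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.uniform(-step, step)  # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = proposal  # accept; otherwise keep the old x
        samples.append(x)
    return samples

chain = metropolis(100_000)
print(sum(chain) / len(chain))  # sample mean; close to 0 for the standard normal
</syntaxhighlight>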
    
=== Simulation and optimization ===

Another powerful and very popular application for random numbers in numerical simulation is in [[Optimization (mathematics)|numerical optimization]]. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a [[computer chess]] program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the [[traveling salesman problem]] the goal is to minimize distance traveled. There are also applications to engineering design, such as [[multidisciplinary design optimization]]. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration space. Reference<ref name=":62">Spall, J. C. (2003), ''Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control'', Wiley, Hoboken, NJ. http://www.jhuapl.edu/ISSO</ref> is a comprehensive review of many issues related to simulation and optimization.

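A minimal Python sketch of this use of random numbers: a naive random-search minimizer for an illustrative 10-dimensional quadratic objective. Practical stochastic optimizers are far more sophisticated; this only shows the basic mechanism.

<syntaxhighlight lang="python">
import random

def objective(v):
    # Illustrative 10-dimensional function to minimize:
    # a shifted quadratic whose minimum value 0 is at v = (1, ..., 1).
    return sum((x - 1.0) ** 2 for x in v)

def random_search(dim=10, iters=20_000, step=0.5):
    best = [random.uniform(-5.0, 5.0) for _ in range(dim)]
    best_val = objective(best)
    for _ in range(iters):
        # Randomly perturb the current best point; keep the move if it improves.
        cand = [x + random.gauss(0.0, step) for x in best]
        val = objective(cand)
        if val < best_val:
            best, best_val = cand, val
    return best_val

print(random_search())  # decreases toward 0
</syntaxhighlight>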
      
The [[traveling salesman problem]] is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. However, let's assume that instead of wanting to minimize the total distance traveled to visit each desired destination, we wanted to minimize the total time needed to reach each destination. This goes beyond conventional optimization since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine our optimal path we would want to use simulation-optimization to first understand the range of potential times it could take to go from one point to another (represented by a probability distribution in this case rather than a specific distance) and then optimize our travel decisions to identify the best path to follow, taking that uncertainty into account.

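A minimal Python sketch of this simulation-optimization idea, with made-up routes and travel-time distributions: each candidate route is evaluated by Monte Carlo simulation of its uncertain leg times, and the route with the lowest estimated expected time is selected.

<syntaxhighlight lang="python">
import random

# Hypothetical two-route example; the (mean, standard deviation) minutes
# per leg are made-up numbers purely for illustration.
LEGS = {
    "route_A": [(30.0, 5.0), (20.0, 15.0)],
    "route_B": [(28.0, 2.0), (25.0, 3.0)],
}

def simulate_total_time(route):
    # One Monte Carlo trial: draw a random duration for each leg (clipped at 0).
    return sum(max(0.0, random.gauss(mean, sd)) for mean, sd in LEGS[route])

def expected_time(route, n_trials=100_000):
    # Monte Carlo estimate of the expected total travel time for the route.
    return sum(simulate_total_time(route) for _ in range(n_trials)) / n_trials

estimates = {route: expected_time(route) for route in LEGS}
best = min(estimates, key=estimates.get)  # optimize over the simulated outcomes
print(best, estimates)
</syntaxhighlight>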
 
=== Inverse problems ===

Probabilistic formulation of [[inverse problem]]s leads to the definition of a [[probability distribution]] in the model space. This probability distribution combines [[prior probability|prior]] information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
      
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have many model parameters, and an inspection of the [[marginal probability]] densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the [[posterior probability distribution]] and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the ''a priori'' distribution is available.

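A minimal Python sketch of this idea on a toy linear inverse problem with made-up data: instead of reporting only the best-fitting model parameter, a Metropolis-type sampler draws pseudorandom models from the posterior, from which a credible interval can be displayed.

<syntaxhighlight lang="python">
import math
import random

# Toy linear inverse problem with made-up data: d_i = m * x_i + noise,
# with a single unknown model parameter m and assumed noise level SIGMA.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ds = [0.1, 0.9, 2.1, 2.9, 4.2]
SIGMA = 0.2

def log_posterior(m):
    # Flat prior and Gaussian noise: log p(m | d) = -misfit / (2 sigma^2) + const.
    misfit = sum((d - m * x) ** 2 for x, d in zip(xs, ds))
    return -misfit / (2.0 * SIGMA ** 2)

def sample_posterior(n=50_000, step=0.05, m0=0.0):
    m, out = m0, []
    for _ in range(n):
        cand = m + random.uniform(-step, step)
        log_alpha = log_posterior(cand) - log_posterior(m)
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            m = cand
        out.append(m)
    return out

samples = sorted(sample_posterior())  # burn-in ignored for brevity
n = len(samples)
# Display the posterior median and a 95% credible interval,
# rather than only the single best-fitting model.
print(samples[n // 2], samples[int(0.025 * n)], samples[int(0.975 * n)])
</syntaxhighlight>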
    
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex ''a priori'' information and data with an arbitrary noise distribution.<ref name=":63">Mosegaard, Klaus; Tarantola, Albert (1995). "Monte Carlo sampling of solutions to inverse problems" (PDF). ''J. Geophys. Res''. '''100''' (B7): 12431–12447. Bibcode:1995JGR...10012431M. doi:10.1029/94JB03097.</ref><ref name=":64">Tarantola, Albert (2005). ''Inverse Problem Theory''. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-572-9.</ref>
 
=== Philosophy ===
   第503行: 第495行:  
Popular exposition of the Monte Carlo Method was conducted by McCracken.<ref name=":65" /> The method's general philosophy was discussed by Elishakoff<ref name=":66" /> and by Grüne-Yanoff and Weirich.<ref name=":67" />
 
== See also ==
  