--[[用户:嘉树|嘉树]]([[用户讨论:嘉树|讨论]]) Would "可变的" be a better translation for "scalable"? The collocations here include Scalable Agents, Scalable architecture, and Scalable simulations.
== Human-like behaviors and crowd AI ==
    
[[File:Crowd simulation, Covent Garden.jpg|thumb|A crowd simulation of [[Covent Garden square]], London, showing a crowd of pedestrian agents reacting to a street performer]]
A crowd simulation of [[Covent Garden square]], London, showing a crowd of pedestrian agents reacting to a street performer
    
{{Main|Swarm intelligence}}
To simulate more aspects of human activities in a crowd, more is needed than path and motion planning. Complex social interactions, smart object manipulation, and hybrid models are challenges in this area. Simulated crowd behavior is inspired by the flow of real-world crowds. Behavioral patterns, movement speeds and densities, and anomalies are analyzed across many environments and building types. Individuals are tracked and their movements are documented such that algorithms can be derived and implemented into crowd simulations.
Individual entities in a crowd are also called agents. In order for a crowd to behave realistically, each agent should act autonomously (be capable of acting independently of the other agents). This idea is referred to as an agent-based model. Moreover, it is usually desired that the agents act with some degree of intelligence (i.e. the agents should not perform actions that would cause them to harm themselves). For agents to make intelligent and realistic decisions, they should act in accordance with their surrounding environment, react to its changes, and react to the other agents.
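
A minimal, illustrative sketch of this agent-based idea (the class, field names, and steering rule below are assumptions for illustration, not taken from any particular crowd-simulation library): each agent keeps its own state, senses the agents around it, and independently chooses its next move toward a goal while steering away from close neighbours.

<syntaxhighlight lang="python">
# Illustrative agent-based crowd sketch; every agent decides for itself,
# with no central controller, based on its goal and nearby agents.
import math
import random

class Agent:
    def __init__(self, x, y, goal):
        self.x, self.y = x, y
        self.goal = goal              # (gx, gy) target position
        self.speed = 1.0

    def perceive(self, agents, radius=2.0):
        """Return the other agents within a sensing radius."""
        return [a for a in agents
                if a is not self and math.dist((self.x, self.y), (a.x, a.y)) < radius]

    def decide(self, neighbours):
        """Head toward the goal, nudged away from nearby agents (simple avoidance)."""
        dx, dy = self.goal[0] - self.x, self.goal[1] - self.y
        norm = math.hypot(dx, dy) or 1.0
        dx, dy = dx / norm, dy / norm
        for n in neighbours:          # repulsion from each close neighbour
            dx += 0.5 * (self.x - n.x)
            dy += 0.5 * (self.y - n.y)
        return dx, dy

    def step(self, agents, dt=0.1):
        dx, dy = self.decide(self.perceive(agents))
        norm = math.hypot(dx, dy) or 1.0
        self.x += self.speed * dt * dx / norm
        self.y += self.speed * dt * dy / norm

# Usage: a small crowd of autonomous agents all heading for the same exit.
crowd = [Agent(random.uniform(0, 10), random.uniform(0, 10), goal=(10, 10))
         for _ in range(20)]
for _ in range(100):
    for agent in crowd:
        agent.step(crowd)
</syntaxhighlight>
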
=== Rule-based AI ===
    
[[File:MaslowsHierarchyOfNeeds.svg|alt=Maslow's Hierarchy of Needs|thumb|Maslow's Hierarchy of Needs]]
In rule-based AI, virtual agents follow scripts: "if this happens, do that". This is a good approach to take if agents with different roles are required, such as a main character and several background characters. This type of AI is usually implemented with a hierarchy, such as in Maslow's hierarchy of needs, where the lower the need lies in the hierarchy, the stronger it is.
For example, consider a student walking to class who encounters an explosion and runs away. The theory behind this is that initially the first four levels of his needs are met, and the student is acting according to his need for self-actualization. When the explosion happens, his safety is threatened, which is a much stronger need, causing him to act according to that need.
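
A minimal sketch of such a rule hierarchy (the specific needs, predicates, and actions below are hypothetical): the agent checks its needs from most to least fundamental and acts on the first one that is not currently satisfied, so a threat to safety overrides walking to class.

<syntaxhighlight lang="python">
# Rule-based "if this happens, do that" sketch with a Maslow-style ordering.
# Needs are listed from most fundamental to least; the first unmet need wins.
NEEDS = [
    ("physiological",      lambda s: s["energy"] > 0.2,   "find food"),
    ("safety",             lambda s: not s["threatened"], "run away"),
    ("belonging",          lambda s: s["with_group"],     "rejoin group"),
    ("esteem",             lambda s: s["on_schedule"],    "hurry"),
    ("self-actualization", lambda s: False,               "walk to class"),
]

def choose_action(state):
    for need, satisfied, action in NEEDS:
        if not satisfied(state):
            return need, action

# The student before and after the explosion:
calm  = {"energy": 0.9, "threatened": False, "with_group": True, "on_schedule": True}
blast = dict(calm, threatened=True)

print(choose_action(calm))    # ('self-actualization', 'walk to class')
print(choose_action(blast))   # ('safety', 'run away')
</syntaxhighlight>
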
This approach is scalable, and can be applied to crowds with a large number of agents. Rule-based AI, however, does have some drawbacks. Most notably, the behavior of the agents can become very predictable, which may cause a crowd to behave unrealistically.
=== Learning AI ===
    
In learning AI, virtual characters behave in ways that have been tested to help them achieve their goals. Agents experiment with their environment or a sample environment which is similar to their real one.
Agents perform a variety of actions and learn from their mistakes. Each agent alters its behavior in response to rewards and punishments it receives from the environment. Over time, each agent would develop behaviors that are consistently more likely to earn high rewards.
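
A minimal sketch of this reward-and-punishment loop (the behaviours, reward values, and learning rate are made up for illustration): the agent keeps a score per behaviour, nudges each score toward the reward it actually receives, and increasingly favours the behaviours that score highest.

<syntaxhighlight lang="python">
# Reward/punishment adaptation sketch; the actions and rewards are hypothetical.
import random

preferences = {"push_through": 0.0, "wait": 0.0, "detour": 0.0}
LEARNING_RATE = 0.1

def environment_feedback(action):
    """Hypothetical environment: pushing is punished, detours are rewarded."""
    base = {"push_through": -1.0, "wait": 0.2, "detour": 0.5}[action]
    return base + random.gauss(0, 0.1)

def pick_action(eps=0.1):
    """Mostly exploit the best-scoring behaviour, occasionally explore."""
    if random.random() < eps:
        return random.choice(list(preferences))
    return max(preferences, key=preferences.get)

for _ in range(1000):
    action = pick_action()
    reward = environment_feedback(action)
    # Move the stored preference a small step toward the observed reward.
    preferences[action] += LEARNING_RATE * (reward - preferences[action])

print(preferences)   # over many steps, 'detour' tends to end up with the highest score
</syntaxhighlight>
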
If this approach is used, along with a large number of possible behaviors and a complex environment, agents will act in a realistic and unpredictable fashion.
==== Algorithms ====
    
There are a wide variety of machine learning algorithms that can be applied to crowd simulations.
Q-learning is an algorithm residing under reinforcement learning, a subfield of machine learning. A basic overview of the algorithm is that each action is assigned a Q value and each agent is given the directive to always perform the action with the highest Q value. In this case learning applies to the way in which Q values are assigned, which is entirely reward based. When an agent comes in contact with a state, s, and action, a, the algorithm then estimates the total reward value that an agent would receive for performing that state-action pair. After calculating this data, it is then stored in the agent's knowledge and the agent proceeds to act from there.

(--[[用户:嘉树|嘉树]]([[用户讨论:嘉树|讨论]]) I rendered "learning applies to the way in which Q values are assigned" as "学习效果取决于分配Q值的方式"; I am not sure whether this is correct.)
    +
Given a state s and action a, r and s′ are the reward and state after performing (s, a), and a′ ranges over all of the actions.
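
The description above corresponds to the standard tabular Q-learning update <math>Q(s,a) \leftarrow r + \gamma \max_{a'} Q(s',a')</math>. Below is a minimal sketch of that update; the tiny corridor environment, discount factor, and exploration rate are illustrative assumptions rather than part of the article.

<syntaxhighlight lang="python">
# Tabular Q-learning sketch for Q(s, a) <- r + gamma * max_{a'} Q(s', a').
# The 5-cell corridor environment below is a made-up example.
import random
from collections import defaultdict

GAMMA = 0.9                               # discount factor (assumed)
ACTIONS = ["left", "right"]
GOAL = 4                                  # rightmost cell of the corridor

Q = defaultdict(float)                    # Q[(state, action)] -> estimated total reward

def step(state, action):
    """Hypothetical environment: reaching the goal cell yields a reward of 1."""
    next_state = max(0, state - 1) if action == "left" else min(GOAL, state + 1)
    return next_state, (1.0 if next_state == GOAL else 0.0)

def choose(state, eps=0.2):
    """Epsilon-greedy: usually take the highest-Q action, sometimes explore."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(200):
    state = 0
    while state != GOAL:
        action = choose(state)
        next_state, reward = step(state, action)
        # Estimated total reward = immediate reward + discounted best value of next state.
        Q[(state, action)] = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        state = next_state

print({k: round(v, 2) for k, v in Q.items()})   # 'right' actions accumulate the higher Q values
</syntaxhighlight>

In practice a learning rate is usually added so that new estimates are blended with old ones rather than overwriting them, but the simplified form above follows the description in the text.
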
      
== Crowd rendering and animation ==