Brain Science and Psychology Applicable to Game AI

 

Brain organization models have some value when designing efficient AI engines. The corresponding division of tasks between the brain and the game system is shown in the figure: the AI task is decomposed into several specific sub-modules, other classes are allowed to collect the output of these sub-modules, and this knowledge is blended into the game character.

 

How to design an AI's knowledge base and learning?

  • The human brain stores and learns from everything it encounters, but an AI system needs a reliable way to decide what is worth learning.
  • The human brain does not let memories degrade simply because time passes. AI can likewise lock in these plastic changes by dynamic hardcoding, transferring them into a long-term memory; but too much hardcoding, or hardcoding used improperly, can make a person or game character seem pathological or forgetful.
  • Short-term (working) memory holds perceptions only briefly; they are filtered by importance and then either stored in long-term memory or simply forgotten. (Note its limited breadth and single focus of attention.)
A typical enemy follows the player for a specified period after spotting him, but once the player hides it completely forgets the player's presence and returns to its original job. This purely state-based AI behavior is unrealistic and looks unintelligent. An adversary designed with a more simulated memory model, or simply given the ability to remember anything for longer than a minute, could still return to its original job while being more sensitive to future attacks, able to seek support in advance, and so on. There is always room to improve and extend realistic AI behavior.
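A more simulated memory model of the kind described might look like the following minimal sketch. The class name, field names, and decay constants are all illustrative, not from the text:

```python
class EnemyMemory:
    """Decaying memory of the player's last known position.

    Instead of a binary "seen / forgotten" flag, suspicion decays
    gradually, so the enemy stays alert for a while after losing sight.
    """
    def __init__(self, decay_per_second=0.1, alert_threshold=0.3):
        self.suspicion = 0.0          # 0 = calm, 1 = player just seen
        self.last_known_pos = None
        self.decay = decay_per_second
        self.threshold = alert_threshold

    def on_player_seen(self, pos):
        self.suspicion = 1.0          # refresh short-term memory
        self.last_known_pos = pos

    def update(self, dt):
        # Memory fades with time instead of vanishing instantly.
        self.suspicion = max(0.0, self.suspicion - self.decay * dt)

    def is_alert(self):
        # Above the threshold the enemy keeps searching / stays sensitive.
        return self.suspicion >= self.threshold

m = EnemyMemory()
m.on_player_seen((10, 4))
m.update(dt=5.0)      # 5 seconds after the player hides
print(m.is_alert())   # still wary
m.update(dt=5.0)      # 5 more seconds: suspicion has fully decayed
print(m.is_alert())
```

With tuned decay rates, "going back to its original job" and "being more sensitive to future attacks" stop being mutually exclusive: the enemy resumes its patrol while its threshold-crossing memory slowly fades.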

The brain uses "modulators" to adjust memory in special situations according to context; an AI modulator can likewise overload the state and behavior of the entire AI. Traditional state-based AI can borrow this concept of modulation to become more flexible. A previously alerted enemy would normally transition into a completely different Alerted state, then transition back to Normal after a slow decay. With a state system that supports modifiers, an "aggressive modifier" can instead be applied while the enemy maintains its normal Guard state.
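A state system with such a modifier might look like this minimal sketch. Class, state, and field names are illustrative; the point is that one modulator value reshapes how the existing states behave instead of requiring a new state:

```python
class ModulatedGuardAI:
    """Two-state guard (Normal/Alerted) plus a context "modulator"."""
    def __init__(self):
        self.state = "Normal"
        self.alert_level = 0.0
        self.aggression = 1.0   # modulator: >1 alerts faster, decays slower

    def on_noise(self, intensity):
        # The modulator scales how strongly stimuli raise alertness.
        self.alert_level = min(1.0, self.alert_level + intensity * self.aggression)
        if self.alert_level > 0.5:
            self.state = "Alerted"

    def update(self, dt):
        # Alertness decays slowly; aggression slows the decay further.
        self.alert_level = max(0.0, self.alert_level - 0.05 * dt / self.aggression)
        if self.alert_level <= 0.5:
            self.state = "Normal"

guard = ModulatedGuardAI()
guard.aggression = 2.0   # "aggressive modifier" applied by context
guard.on_noise(0.3)      # the same noise now pushes past the threshold
print(guard.state)       # Alerted
```

The same `on_noise(0.3)` with the default `aggression` of 1.0 would leave the guard in Normal, which is exactly the flexibility the modifier buys over adding more hand-authored states.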

  • The human brain learns from things stored in its different memories, and this expansion of the knowledge base about the world through biases and associations usually has several independent implementations: discovery or direct experience, imitation, and imaginative inference. Games can learn in two ways: by counting which of their own behaviors work against players, and by recording what players do against AI opponents and then imitating or improving on those human behaviors.
  • Adopting an AI learning algorithm requires many iterations, and learning in a fast-paced, short-cycle game can cause performance to drop dramatically. Full learning is therefore typically done during development, before release, with no learning at run time until new methods are developed that meet speed and accuracy requirements.
  • Learning doesn't have to be conscious, and many games use influence maps for unconscious learning. An influence map system accumulates the same types of information and stores them simply, in a fast and accessible way, while keeping the number of iterations low. For example, this can let the pathfinding algorithm of an RTS game avoid known "kill zones".
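An influence map of this kind can be sketched as a grid of accumulated danger that feeds directly into movement cost. All names and constants below are illustrative:

```python
from collections import defaultdict

class InfluenceMap:
    """ "Unconscious" learning: deaths accumulate per grid cell, and the
    pathfinder simply adds that influence to its movement cost."""
    def __init__(self, width, height):
        self.size = (width, height)        # grid bounds (unused in sketch)
        self.danger = defaultdict(float)   # (x, y) -> accumulated danger

    def record_death(self, cell, weight=1.0):
        self.danger[cell] += weight

    def decay(self, factor=0.95):
        # Slowly forget old kill zones so the map tracks the current game.
        for cell in self.danger:
            self.danger[cell] *= factor

    def move_cost(self, cell, base_cost=1.0, danger_scale=5.0):
        return base_cost + danger_scale * self.danger[cell]

imap = InfluenceMap(64, 64)
for _ in range(3):
    imap.record_death((10, 12))   # units keep dying at a choke point
print(imap.move_cost((10, 12)))   # inflated cost: the pathfinder avoids it
print(imap.move_cost((0, 0)))     # normal cost elsewhere
```

No unit ever "knows" about the kill zone; the knowledge lives in the accumulated numbers, which is what makes this learning unconscious.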

How to design cognitive systems for AI?

  • The brain achieves perception by using a variety of different systems to quickly classify and prioritize incoming data. In game AI, we can pick out whatever level of perception a given process needs. A mock-up for a sports game is shown in the figure:
    When coding any particular AI subsystem, make sure to use only the perceptions that are really needed, but beware of over-simplifying, which makes the subsystem's output behavior easy to predict. An enemy that can only hear sounds within a fixed radius feels strange; initial distance, initial volume, attenuation during propagation, and other acoustic properties should all be considered.
  • The decision-making system filters everything the AI could do down to what fits the current game state. Among AI techniques for this state space: if the results of perception are discrete and isolated, an enumerated-state response works well; if responses are continuous over the whole range, neural networks are a better fit, because they operate in continuous response domains and are better at identifying local extrema.
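The hearing example above can be made less predictable by modeling attenuation rather than a fixed radius. A minimal sketch, with illustrative constants (inverse-square spreading, i.e. 6 dB loss per doubling of distance from a 1 m reference):

```python
import math

def heard(volume_db, distance_m, hearing_threshold_db=10.0):
    """Return True if a sound of the given source volume is audible
    at the given distance, after distance attenuation."""
    if distance_m <= 1.0:
        return volume_db >= hearing_threshold_db
    # Inverse-square law: level drops 20*log10(d) dB relative to 1 m.
    attenuated = volume_db - 20.0 * math.log10(distance_m)
    return attenuated >= hearing_threshold_db

print(heard(60.0, 5.0))    # loud footsteps nearby: audible
print(heard(60.0, 500.0))  # same footsteps far away: inaudible
```

A loud sound now carries much farther than a quiet one, so the player can no longer memorize a single "safe distance" for every noise.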

How to design systems that seem intelligent (Theory of Mind, ToM)?

ToM is more a theory of cognition; it makes the profound point that people have the ability to understand that others have thoughts and worldviews separate from their own. Technically, an agent with a ToM is a conscious agent able to comprehend intent, rather than one with only a rigid cognition of actions. How, then, do we measure whether such an agent is intelligent? The Turing test holds that if a program can successfully converse with a human player, and the player cannot tell that it is a computer, then it must be intelligent.

But in practice, we want game AI systems to make decisions like humans, exhibiting higher-level properties and going beyond simple gameplay, so we must imitate thoughts, not actions. ToM can give programmers or designers guidance on which types of information can be provided directly to the player, which should not be provided, and which can be handled in a vague way.

As an example, take a simple battlefield setup: a human player at the bottom of the map, surrounded by 4 CPU enemies that move between multiple cover points. The rules are as follows:

If no one is shooting at the player and I am fully loaded and ready, I will start shooting (only one soldier may shoot at a time in this game).

If I'm exposed, I'll go to the nearest unoccupied cover position and randomly shout "cover me!", "on your left!", etc.

If I'm in cover, I'll reload and wait for the current shooter to finish, perhaps playing a scanning animation to make it look as if I'm getting ready to snipe the player.
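The three rules above can be sketched as one decision function evaluated per AI tick. This is a minimal sketch; field names like `exposed` and `loaded`, and the action strings, are illustrative:

```python
import random

def decide(soldier, world):
    """Apply the three rules in priority order; return the chosen action."""
    if soldier["exposed"]:
        # Rule 2: run to cover and shout a random battle cry.
        world["barks"].append(random.choice(["cover me!", "on your left!"]))
        return "run_to_nearest_cover"
    if world["active_shooter"] is None and soldier["loaded"]:
        # Rule 1: only one soldier may shoot at a time.
        world["active_shooter"] = soldier["id"]
        return "shoot"
    # Rule 3: stay in cover, reload, play a scanning animation.
    return "reload_and_wait"

world = {"active_shooter": None, "barks": []}
squad = [{"id": i, "exposed": (i == 0), "loaded": True} for i in range(4)]
actions = [decide(s, world) for s in squad]
print(actions)  # ['run_to_nearest_cover', 'shoot', 'reload_and_wait', 'reload_and_wait']
```

Note that no soldier inspects any other soldier: the "coordination" emerges purely from the shared `active_shooter` slot and the independent per-soldier rules.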


Scenario description: 4 enemy soldiers come into view; one starts firing quickly while the other three look for cover. The first soldier then stops firing, shouts "cover me!", and runs forward to cover; at the same moment another soldier pops out and starts shooting. In this system the soldiers are completely unaware of each other, of the player's intentions, and of the fact that they are performing a basic alternating advance-and-cover military maneuver. But because human players naturally try to form a ToM about the enemy, to the player this looks like highly coordinated, intelligent behavior. So the strategy works.

 

How to construct a Bounded Optimality (BO) decision-making system?

Perfect rationality is neither achievable nor necessary for most entertainment games; the goal of game AI is to emulate human levels of performance, not a perfect ideal. So instead of forcing the program to find the ideal solution in a limited amount of time, it is better to simply steer each decision in the right direction. The so-called optimal solutions to real-world problems are often computationally intractable, and few real-world problems are free of constraints. The BO idea can be realized as an incremental hierarchy. For example, path search can be set up with several levels of complexity: first search over large map areas, then within each area, then locally, and finally around the dynamic target. Each successive level refines the previous one, and every level already moves the player in the right direction.
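The coarsest level of such a hierarchy can be sketched as a breadth-first search over large map regions; the finer levels would repeat the same idea inside each region. The region graph below is an illustrative adjacency dict:

```python
from collections import deque

def coarse_path(region_graph, start_region, goal_region):
    """Breadth-first search over map regions (level 1 of the hierarchy).

    Even if the finer levels never run, following this region path
    already moves the unit in the right direction.
    """
    frontier, came_from = deque([start_region]), {start_region: None}
    while frontier:
        r = frontier.popleft()
        if r == goal_region:
            path = []                     # rebuild the path backwards
            while r is not None:
                path.append(r)
                r = came_from[r]
            return path[::-1]
        for nxt in region_graph[r]:
            if nxt not in came_from:
                came_from[nxt] = r
                frontier.append(nxt)
    return None                           # goal region unreachable

regions = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B", "E"], "E": ["D"]}
print(coarse_path(regions, "A", "E"))   # ['A', 'B', 'D', 'E']
```

A real engine would use A* with region-size costs rather than plain BFS, but the bounded-optimality point is the layering, not the search algorithm at each layer.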

 

What has robotics taught us?

Many AI technologies in games derive from robotics research, including the very important A* algorithm. The main inspirations include the following:

  • Simplicity of design and solutions. The robots designed by Brooks, whose approach was later adopted for Mars missions, do not attempt to navigate an area by identifying obstacles; they use a general-purpose method to force their way past obstacles.
  • Theory of Mind.
  • Multi-layer decision-making system. Modern robotic platforms use a base subsystem on which multiple levels of decision-making structures run, reflecting decisions from high level to low. This bottom-up behavioral design lets robots achieve a degree of autonomy in certain environments. The base subsystems represent the highest-priority decisions and can override and modify behaviors coming from the top-level decision structures; the higher the level, the lower its priority. This independence between levels makes the system more robust and fault-tolerant: chaos at one level does not disrupt the overall structure, and as long as the rest of the system returns to normal, the robot can still complete its task. This type of structure is ideal for games that require simultaneous decisions at multiple levels of complexity, such as RTS games.
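This layered, bottom-up priority scheme (subsumption-style) can be sketched as follows; layer names, priorities, and actions are illustrative:

```python
class Layer:
    """One behavior layer; lower priority numbers override higher ones."""
    def __init__(self, name, priority, act):
        self.name, self.priority, self.act = name, priority, act

def run_subsumption(layers, sensors):
    """Ask every layer for an action; the highest-priority non-None
    proposal wins. A failing layer simply proposes nothing, so the
    rest of the system keeps the agent functioning."""
    proposals = [(l.priority, l.name, l.act(sensors)) for l in layers]
    valid = [(p, n, a) for p, n, a in proposals if a is not None]
    return min(valid)[2] if valid else "idle"

layers = [
    Layer("avoid_collision", 0, lambda s: "swerve" if s["obstacle"] else None),
    Layer("attack",          1, lambda s: "fire" if s["enemy_visible"] else None),
    Layer("explore",         2, lambda s: "wander"),
]
print(run_subsumption(layers, {"obstacle": True,  "enemy_visible": True}))   # swerve
print(run_subsumption(layers, {"obstacle": False, "enemy_visible": False}))  # wander
```

Because each layer is independent, deleting or breaking the `attack` layer degrades the agent gracefully instead of paralyzing it, which is the fault tolerance described above.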

 
