Interpretation of "agent" in the field of complex systems


1. Agent-based model

An agent-based model (ABM) is a computational model for simulating the actions and interactions of autonomous agents (individuals or collective entities such as organizations or groups) in order to understand the behavior of a system and the factors that govern its outcomes.

The term is sometimes rendered as "subject model" or "agent model". Here the agent is distinct both from an individual and from an intelligent agent: it is an abstract object that can be defined artificially and that possesses certain autonomy and interaction properties. "Agent model" on its own would more properly translate "intelligent agent model"; to keep the distinction clear, "agent-based model" is referred to below as the agent model or agent-based model.

Figure: Baidu translation results for "agent-based model"


It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems, and evolutionary programming. Monte Carlo methods are used to understand the stochasticity of these models. In ecology in particular, ABMs are also known as individual-based models (IBMs). A review of recent literature on agent-based models, agent models, and multi-agent systems shows that ABMs are used in many scientific fields, including biology, ecology, and the social sciences. Agent-based models are related to, but distinct from, multi-agent systems and multi-agent simulations: the goal of ABM is to obtain explanatory insight into the collective behavior of agents that follow simple rules, usually in natural systems, rather than to design agents or solve specific practical or engineering problems.

An agent-based model is a microscale model that simulates the simultaneous operation and interaction of multiple agents in an attempt to recreate and predict the appearance of complex phenomena. The process is one of emergence, sometimes expressed as "the whole is greater than the sum of its parts": higher-level system properties arise from the interactions of lower-level subsystems. Equivalently, state changes at the macroscale arise from agent behavior at the microscale, or simple behaviors (the rules followed by agents) generate complex behavior (state changes at the level of the entire system).

Individual agents, often described as boundedly rational, are assumed to act in what they perceive to be their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision rules. ABM agents may experience "learning", adaptation, and reproduction.

Most agent-based models consist of: (1) a large number of agents specified at various scales (often referred to as agent granularity); (2) decision-making heuristics; (3) learning rules or adaptive processes; (4) an interaction topology; and (5) an environment. An ABM is typically implemented as a computer simulation, either as custom software or via an ABM toolkit, and can then be used to test how changes in individual behaviors affect the system's emerging overall behavior. A minimal sketch of these components is given below.
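To make the list above concrete, here is a minimal, illustrative Python sketch of those components (the agent class, decision heuristic, ring-shaped interaction topology, and environment are all invented for demonstration and are not taken from any particular ABM toolkit):

```python
import random

class Agent:
    """A boundedly rational agent with a simple decision heuristic."""
    def __init__(self, agent_id, state=0.0):
        self.agent_id = agent_id
        self.state = state

    def decide(self, neighbors):
        # Decision heuristic: move toward the average state of neighbors.
        if not neighbors:
            return self.state
        target = sum(n.state for n in neighbors) / len(neighbors)
        return self.state + 0.1 * (target - self.state)

class Environment:
    """Holds the agents and a simple interaction topology (a ring)."""
    def __init__(self, n_agents):
        self.agents = [Agent(i, random.random()) for i in range(n_agents)]

    def neighbors(self, agent):
        i, n = agent.agent_id, len(self.agents)
        return [self.agents[(i - 1) % n], self.agents[(i + 1) % n]]

    def step(self):
        # All agents decide simultaneously, then states are updated.
        new_states = [a.decide(self.neighbors(a)) for a in self.agents]
        for agent, s in zip(self.agents, new_states):
            agent.state = s

if __name__ == "__main__":
    env = Environment(n_agents=20)
    for _ in range(50):
        env.step()
    print([round(a.state, 2) for a in env.agents])
```

Running the loop shows the agents' states converging, a simple emergent outcome of the local interaction rule.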

1.1 History

The idea of agent-based modeling developed as a relatively simple concept in the late 1940s. Because it is computationally demanding, it did not become widespread until the 1990s.

1.1.1 Early development

The history of agent-based models can be traced back to the von Neumann machine, a theoretical machine capable of reproducing itself. The device von Neumann proposed would follow precise instructions to build a copy of itself. The concept was then developed by von Neumann's friend Stanislaw Ulam, also a mathematician, who suggested building the machine on paper as a collection of cells on a grid. The idea intrigued von Neumann, who took it up, creating the first of the devices later known as cellular automata. A further advance was introduced by the mathematician John Conway, who constructed the well-known Game of Life. Unlike von Neumann's machine, Conway's Game of Life operates by simple rules in a virtual world in the form of a two-dimensional checkerboard.

The Simula programming language, developed in the mid-1960s and widely implemented in the early 1970s, was the first framework for automating step-by-step agent simulations.

1.1.2 1970s and 1980s: the first models

One of the earliest agent-based models in concept was Thomas Schelling's segregation model, discussed in his 1971 paper "Dynamic Models of Segregation". Although Schelling originally used coins and graph paper rather than a computer, his model embodied the basic concept of agent-based models: autonomous agents interacting in a shared environment with an observed aggregate, emergent outcome.
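As a hedged illustration of the idea (Schelling's own experiment used coins and graph paper; the grid size, tolerance threshold, and movement rule below are invented for demonstration), a Schelling-style segregation model can be sketched in a few dozen lines of Python:

```python
import random

GRID = 20            # grid side length (illustrative)
EMPTY_FRACTION = 0.1
TOLERANCE = 0.3      # minimum fraction of like neighbors an agent accepts

def make_grid():
    cells = []
    for _ in range(GRID * GRID):
        if random.random() < EMPTY_FRACTION:
            cells.append(None)
        else:
            cells.append(random.choice(["A", "B"]))
    return [cells[i * GRID:(i + 1) * GRID] for i in range(GRID)]

def unhappy(grid, x, y):
    me = grid[y][x]
    if me is None:
        return False
    like = other = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nb = grid[(y + dy) % GRID][(x + dx) % GRID]
            if nb is None:
                continue
            like += nb == me
            other += nb != me
    total = like + other
    return total > 0 and like / total < TOLERANCE

def step(grid):
    empties = [(x, y) for y in range(GRID) for x in range(GRID) if grid[y][x] is None]
    movers = [(x, y) for y in range(GRID) for x in range(GRID) if unhappy(grid, x, y)]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ey][ex], grid[y][x] = grid[y][x], None
        empties.append((x, y))

grid = make_grid()
for _ in range(30):
    step(grid)
print(sum(unhappy(grid, x, y) for y in range(GRID) for x in range(GRID)), "unhappy agents remain")
```

Even with a mild tolerance threshold, repeated local moves produce strongly clustered neighborhoods, which is the emergent result Schelling highlighted.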

In the early 1980s, Robert Axelrod held a Prisoner's Dilemma strategy tournament and had the submitted strategies interact in an agent-based fashion to determine a winner. Axelrod went on to develop many other agent-based models in political science examining phenomena from ethnocentrism to the dissemination of culture. By the late 1980s, Craig Reynolds' work on flocking models led to the development of some of the first biological agent-based models that contained social characteristics. He tried to simulate lively, realistic biological agents, known as artificial life, a term coined by Christopher Langton.

The first use of the term "agent" in its current sense is difficult to trace. One possible candidate is John Holland and John H. Miller's 1991 paper "Artificial Adaptive Agents in Economic Theory", based on an earlier conference presentation of theirs.

Meanwhile, in the 1980s, social scientists, mathematicians, operations research researchers, and a small group from other disciplines developed Computational and Mathematical Organization Theory (CMOT). The field grew as a special interest group within The Institute of Management Sciences (TIMS) and its sister society, the Operations Research Society of America (ORSA).

1.1.3 1990s: Expansion

The 1990s were especially notable for the expansion of ABM within the social sciences. One notable effort was Sugarscape, the large-scale ABM developed by Joshua M. Epstein and Robert Axtell to simulate and explore the role of social phenomena such as pollution, sexual reproduction, combat, and the transmission of disease and even culture. Other notable developments of the 1990s included the ABM by Carnegie Mellon University's Kathleen Carley exploring the co-evolution of social networks and culture. During the 1990s, Nigel Gilbert published the first textbook on social simulation, Simulation for the Social Scientist (1999), and established a journal from a social-science perspective, the Journal of Artificial Societies and Social Simulation (JASSS). In addition to JASSS, agent-based models of any discipline are within the scope of the SpringerOpen journal Complex Adaptive Systems Modeling (CASM).

By the mid-1990s, the social-science thread of ABM was concerned with issues such as designing effective teams, understanding the communication needed for organizational effectiveness, and the behavior of social networks. CMOT, later renamed Computational Analysis of Social and Organizational Systems (CASOS), incorporated more and more agent-based modeling. Samuelson (2000) gives a brief overview of the early history, and Samuelson (2005) and Samuelson and Macal (2006) trace more recent developments.

In the late 1990s, the merger of TIMS and ORSA to form INFORMS, and INFORMS's move from two meetings a year to one, helped push the CMOT group to form a separate society, the North American Association for Computational Social and Organizational Sciences (NAACSOS). Kathleen Carley was a major contributor, especially to models of social networks; she helped secure National Science Foundation funding for the annual conference and served as the first chair of NAACSOS. She was succeeded by David Sallach of the University of Chicago and Argonne National Laboratory, and then by Michael Prietula of Emory University. Around the same time that NAACSOS began, its counterparts, the European Social Simulation Association (ESSA) and the Pacific Asian Association for Agent-Based Approach in Social Systems Science (PAAA), were organized. As of 2013, the three organizations collaborate internationally. The First World Congress on Social Simulation was held under their joint sponsorship in Kyoto, Japan, in August 2006. The Second World Congress was held in July 2008 in the northern Virginia suburbs of Washington, D.C., with George Mason University taking the leading role in the local arrangements.

1.1.4 2000s and beyond

More recently, Ron Sun developed an approach to agent-based simulation based on a model of human cognition, called cognitive social simulation. UCLA's Bill McKelvey, Suzanne Lohmann, Dario Nardi, Dwight Read, and others have also made significant contributions to organizational behavior and decision-making. Since 2001, UCLA has arranged a conference in Lake Arrowhead, CA, which has become another major gathering point for practitioners in the field.

1.2 Theory

Most computational modeling studies describe systems as being in equilibrium or moving between equilibrium states. However, agent-based modeling, using simple rules, can generate complex and interesting behaviors of different types. The three core ideas of the agent model are agents as objects, emergence and complexity.

The agent-based model consists of rule-based agents that interact dynamically. The systems within which they interact can exhibit real-world-like complexity. Typically agents are situated in space and time and reside in networks or in lattice-like neighborhoods. The location of an agent and its responsive behavior are encoded in algorithmic form in computer programs. In some cases, though not always, agents may be considered intelligent and purposeful. In ecological ABM (often referred to as "individual-based models" in ecology), agents may, for example, be trees in a forest and would not be considered intelligent, although they may be "purposeful" in the sense of optimizing access to a resource such as water. The modeling process is best described as inductive: the modeler makes the assumptions thought most relevant to the situation at hand and then watches what emerges from the agents' interactions. Sometimes the outcome is an equilibrium. Sometimes it is an emergent pattern. Sometimes, however, it is an unintelligible mess.

In some respects, agent-based models complement traditional analytical methods. Where analytical methods enable humans to characterize the equilibria of a system, agent-based models make it possible to generate those equilibria. This generative contribution may be the most mainstream of the potential benefits of agent-based modeling. Agent-based models can explain the emergence of higher-order patterns: network structures of terrorist organizations and the Internet, traffic jams, power-law distributions in the sizes of wars and stock-market crashes, and social segregation that persists despite populations of tolerant people. Agent-based models can also be used to identify leverage points, defined as moments in time at which interventions have extreme consequences, and to distinguish among types of path dependency.

Rather than focusing on stable states, many models consider a system's robustness: the ways a complex system adapts to internal and external pressures so as to maintain its functioning. Harnessing this complexity requires considering the agents themselves, their diversity, connectedness, and level of interaction.

1.2.1 Framework

Recent work on the modeling and simulation of complex adaptive systems has demonstrated the need to combine agent-based and complex-network-based models. One framework describes four levels of developing models of complex adaptive systems, illustrated with several multidisciplinary case studies:

  1. Complex network modeling level for developing models that use data on interactions of various system components.
  2. An exploratory agent-based modeling level for developing agent models to assess feasibility for further research. This can, for example, be useful for developing proof-of-concept models (such as grant applications) without requiring researchers to undertake an extensive learning curve.
  3. Descriptive Agent-based Modeling (DREAM) for developing descriptions of agent models by using templates and complex network-based models. Building DREAM models allows model comparisons across scientific disciplines.
  4. Validated agent-based modeling using the Virtual Overlay Multiagent System (VOMAS) for developing verified and validated models in a formal manner.

Other methods of describing agent-based models include code templates and text-based methods such as the ODD protocol (Overview, Design concepts, and Details).

The role of the environment in which agents reside, both macroscopic and microscopic, is also becoming an important factor in agent-based modeling and simulation work. A simple environment affords simple agents, whereas complex environments give rise to a diversity of behavior.

1.2.2 Multiscale Modeling

One of the strengths of agent-based modeling is its ability to mediate information flow between scales. When additional detail about an agent is needed, a researcher can integrate it with models describing that extra detail. When one is interested in the emergent behavior exhibited by groups of agents, agent-based models can be combined with continuum models that describe population dynamics. For example, in a study of CD4+ T cells (a key cell type of the adaptive immune system), the researchers combined models of signal transduction, gene regulation, metabolism, cellular behavior, and cytokine transport. In the resulting modular model, signal transduction and gene regulation are described by a logical model, metabolism by a constraint-based model, cell population dynamics by an agent-based model, and systemic cytokine concentrations by ordinary differential equations. In this multiscale model, the agent-based model occupies the central place and coordinates each flow of information between scales.
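The sketch below is an invented toy example, not the cited T-cell model: it couples an agent-based cell population (microscale rules) with a forward-Euler step of an ordinary differential equation for a systemic cytokine concentration (macroscale), to show how information can flow between the two scales. All parameter names and rates are made up for illustration.

```python
import random

# Invented parameters for illustration only.
SECRETION_RATE = 0.05   # cytokine secreted per activated cell per step
DECAY_RATE = 0.1        # cytokine decay rate per step
ACTIVATION_GAIN = 0.2   # how strongly cytokine promotes activation
DT = 1.0

class Cell:
    def __init__(self):
        self.activated = False

    def step(self, cytokine):
        # Microscale rule: probability of activation rises with cytokine level.
        if not self.activated and random.random() < ACTIVATION_GAIN * cytokine * DT:
            self.activated = True

def run(n_cells=200, n_steps=50):
    cells = [Cell() for _ in range(n_cells)]
    cytokine = 0.1  # initial systemic concentration
    for _ in range(n_steps):
        # Agent-based layer: each cell updates from the shared cytokine level.
        for cell in cells:
            cell.step(cytokine)
        # Macroscale layer: forward-Euler step of dC/dt = s*A/N - d*C,
        # where A is the number of activated cells.
        activated = sum(c.activated for c in cells)
        cytokine += DT * (SECRETION_RATE * activated / n_cells - DECAY_RATE * cytokine)
    return activated, cytokine

if __name__ == "__main__":
    print(run())
```

The agent layer passes an aggregate (the count of activated cells) up to the ODE, and the ODE passes a field value (the cytokine concentration) back down, which is the coordinating role the paragraph above describes.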

1.3 Application

1.3.1 In modeling complex adaptive systems

We live in a very complex world, and we face complex phenomena such as the formation of social rules and the emergence of new disruptive technologies. To understand such phenomena better, social scientists often use a reductionist approach in which they reduce complex systems to lower-level variables and model the relationships among them through systems of equations such as partial differential equations. This approach, called equation-based modeling (EBM), has some fundamental weaknesses when it comes to modeling real complex systems: EBM rests on unrealistic assumptions such as unbounded rationality and perfect information, while adaptability, evolvability, and network effects go unaddressed. The framework of complex adaptive systems (CAS) has proven very influential over the past two decades in addressing the shortcomings of reductionism. In contrast to reductionism, the CAS framework studies complex phenomena organically, with agents that are boundedly rational and adaptive. As a powerful CAS modeling approach, agent-based modeling has been gaining popularity among academics and practitioners. ABMs show how agents' simple behavioral rules and local interactions at the microscale can generate surprisingly complex patterns at the macroscale.

1.3.2 In biology

Agent-based modeling has been used widely in biology, including analysis of the spread of epidemics and the threat of biological warfare, as well as applications in population dynamics, stochastic gene expression, plant-animal interactions, vegetation ecology, landscape diversity, sociobiology, the rise and fall of ancient civilizations, the evolution of ethnocentric behavior, forced displacement and migration, the dynamics of language choice, cognitive modeling, and biomedical applications including the modeling of 3D breast tissue formation/morphogenesis, the effects of ionizing radiation on breast stem cell subpopulation dynamics, inflammation, and the human immune system. Agent-based models have also been used to develop decision support systems, for example in breast cancer. They are increasingly used to model pharmacological systems in early-stage and preclinical research, to aid drug development, and to gain understanding of biological systems that is not possible a priori. Military applications have also been evaluated. Furthermore, agent-based models have recently been used to study biological systems at the molecular level, and to describe ecological processes at work in ancient systems, such as those of dinosaur environments and other, more recent, ancient systems.

1.3.3 In epidemiology

Agent-based models now complement traditional compartmental models, the usual type of epidemiological model. ABMs have been shown to outperform compartmental models with respect to predictive accuracy. Recently, ABMs such as epidemiologist Neil Ferguson's CovidSim have been used to inform public-health (non-pharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs have been criticized for simplistic and unrealistic assumptions. Still, when they are accurately calibrated, they can be useful in informing decisions about mitigation and containment measures.
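For illustration only (a minimal sketch, not CovidSim or any calibrated model), the following agent-based SIR-style simulation shows how an ABM tracks each individual's state and random contacts, in contrast with a compartmental model that tracks only aggregate counts; all parameters are invented.

```python
import random

# Invented parameters; a real epidemiological ABM would calibrate these to data.
POPULATION = 1000
CONTACTS_PER_DAY = 8
TRANSMISSION_PROB = 0.05
RECOVERY_DAYS = 10
DAYS = 120

# States: "S" susceptible, "I" infected, "R" recovered.
state = ["S"] * POPULATION
days_infected = [0] * POPULATION
for i in random.sample(range(POPULATION), 5):   # seed a few infections
    state[i] = "I"

for day in range(DAYS):
    infected = [i for i in range(POPULATION) if state[i] == "I"]
    for i in infected:
        # Each infected agent meets a few random others per day.
        for j in random.sample(range(POPULATION), CONTACTS_PER_DAY):
            if state[j] == "S" and random.random() < TRANSMISSION_PROB:
                state[j] = "I"
        days_infected[i] += 1
        if days_infected[i] >= RECOVERY_DAYS:
            state[i] = "R"

print("recovered:", state.count("R"), "never infected:", state.count("S"))
```

Because each agent's contacts and timing are explicit, heterogeneity (for example, varying contact rates per agent) can be added directly, which is harder to express in a purely compartmental formulation.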

1.3.4 In terms of business, technology and network theory

Since the mid-1990s, agent-based models have been used to solve a variety of business and technology problems. Application examples include marketing, organizational behavior and cognition, teamwork, supply-chain optimization and logistics, modeling of consumer behavior including word of mouth and social network effects, distributed computing, workforce management, and portfolio management. They have also been used to analyze traffic congestion.

Recently, agent-based modeling and simulation has been applied in various other domains, such as studying the influence of publication venues (journals and conferences) among researchers in the field of computer science. In addition, ABMs have been used to simulate information delivery in ambient assisted environments. A November 2016 paper on arXiv analyzed an agent-based simulation of posts spreading on Facebook. The usefulness of agent-based modeling and simulation has also been demonstrated for peer-to-peer, ad hoc, and other self-organizing and complex networks. The feasibility of combining a formal, computer-science-based specification framework with wireless sensor networks and agent-based simulation has recently been demonstrated.

Agent-based evolutionary search or algorithms are new research topics for solving complex optimization problems.

1.3.5 In the economic and social sciences

ABM received increasing attention as a possible tool for economic analysis both before and after the 2008 financial crisis. ABM does not assume that the economy can reach equilibrium, and "representative agents" are replaced by agents with diverse, dynamic, and interdependent behaviors, including herding. ABM takes a "bottom-up" approach and can generate extremely complex and volatile simulated economies. ABMs can represent unstable systems whose crashes and booms develop out of nonlinear (disproportionate) responses to proportionally small changes. A July 2010 article in The Economist discussed ABM as an alternative to DSGE models. The journal Nature has also encouraged agent-based modeling, with an editorial suggesting that ABMs can represent financial markets and other economic complexities better than standard models, and an article by J. Doyne Farmer and Duncan Foley arguing that ABM can satisfy both Keynes's desire to represent a complex economy and Robert Lucas's desire to build models on microfoundations. Farmer and Foley point to progress made using ABM to model parts of the economy, but argue for the creation of a very large model that incorporates low-level models. One study simulated financial markets with high precision by modeling a complex system of analysts based on three distinct behavioral profiles (imitation, anti-imitation, and indifference); the results showed a correlation between network morphology and the stock market index. However, the ABM approach has been criticized for its lack of robustness across models, where similar models can yield very different results.

ABM has been deployed in architecture and urban planning to evaluate designs and simulate human flow in urban environments and to examine the application of public policy to land use. The field of socioeconomic analysis of infrastructure investment impacts is also growing, leveraging ABM's ability to discern systemic effects on socioeconomic networks. Heterogeneity and dynamics can be easily built into ABM models to address wealth inequality and social mobility.

1.3.5.1 In economics

In economics, an agent (agent) is a participant (more specifically, a decision maker) in a model of some aspect of the economy. Typically, each agent makes a decision by solving a well-defined optimization or selection problem.

For example, buyers (consumers) and sellers (producers) are two common types of agents in partial equilibrium models of a single market. Macroeconomic models, especially dynamic stochastic general equilibrium models explicitly based on microscopic foundations, often distinguish between households, firms, and the government or central bank as the main types of agents in the economy. Each of these actors may play multiple roles in the economy; for example, households may be modeled as consumers, workers, and voters. Some macroeconomic models distinguish between more types of agents, such as workers and shoppers or commercial banks.
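A hedged, textbook-style illustration of "agent as optimizer" (not taken from the source): a consumer agent chooses quantities of two goods to maximize a Cobb-Douglas utility subject to a budget constraint, here by brute-force grid search; the prices, income, and preference parameter are invented.

```python
# Minimal sketch: max x^a * y^(1-a)  subject to  px*x + py*y <= income.
# The closed-form Cobb-Douglas solution is x = a*income/px, y = (1-a)*income/py;
# the grid search below just illustrates "each agent solves an optimization problem".

def consumer_choice(income=100.0, px=2.0, py=5.0, a=0.3, steps=200):
    best = (0.0, 0.0, -1.0)
    for i in range(steps + 1):
        x = (income / px) * i / steps          # spend fraction i/steps of income on good x
        y = (income - px * x) / py             # remaining budget goes to good y
        utility = (x ** a) * (y ** (1 - a)) if x > 0 and y > 0 else 0.0
        if utility > best[2]:
            best = (x, y, utility)
    return best

x, y, u = consumer_choice()
print(f"x={x:.2f}, y={y:.2f}, utility={u:.2f}")   # close to the analytic x=15, y=14
```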

The term agent is also used in the principal-agent model; there it refers specifically to the party entrusted to act on behalf of the principal.

In agent-based computational economics , the corresponding agents are “modeled as computational objects that interact according to rules” in space and time, rather than real people. Rules are formulated to model behavior and social interactions based on prescribed incentives and information. The concept of an agent can be broadly interpreted as any persistent individual, social, biological, or physical entity that interacts with other such entities in the context of a dynamic multi-agent economic system.

1.3.5.2 Representative vs. heterogeneous agents

An economic model in which all agents of a given type (such as all consumers or all firms) are assumed to be identical is called a representative agent model. A model that recognizes differences among agents is called a heterogeneous agent model. Economists often use representative agent models when they want to describe the economy in the simplest terms. By contrast, they may be obliged to use heterogeneous agent models when differences among agents are directly relevant to the question at hand: for example, heterogeneity in age may need to be considered in a model studying the economic effects of pensions, and heterogeneity in wealth may need to be considered in a model studying precautionary saving or redistributive taxation.
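A small, invented sketch of the distinction: aggregate saving computed once for a single representative consumer versus summed over consumers with heterogeneous incomes and saving rates (the incomes and the saving-rate rule are illustrative only).

```python
# Representative-agent view: one household stands in for everyone.
def aggregate_saving_representative(n_households, avg_income, saving_rate):
    return n_households * avg_income * saving_rate

# Heterogeneous-agent view: saving behavior differs with income (invented rule).
def aggregate_saving_heterogeneous(incomes):
    total = 0.0
    for income in incomes:
        saving_rate = 0.05 if income < 30_000 else 0.25
        total += income * saving_rate
    return total

incomes = [20_000, 25_000, 40_000, 200_000]
avg = sum(incomes) / len(incomes)
print(aggregate_saving_representative(len(incomes), avg, saving_rate=0.15))
print(aggregate_saving_heterogeneous(incomes))
```

The two aggregates differ because saving depends nonlinearly on income, which is exactly the kind of case where heterogeneity is directly relevant.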

1.3.6 In water resource management

ABM has also been applied to water resource planning and management, especially for exploring, simulating, and predicting the performance of infrastructure design and policy decisions, and assessing the value of cooperation and information exchange in large water resource systems.

1.3.7 Organizational ABM: agent-directed simulation

Using the metaphor of agent-directed simulation (ADS), systems can be separated into two categories, namely "systems for agents" and "agents for systems". Systems for agents (sometimes referred to as agent systems) are systems that implement agents for use in engineering, human and social dynamics, military applications, and so on. Agents for systems are divided into two subcategories. Agent-supported systems deal with the use of agents as support facilities to enable computer-assisted problem solving or enhanced cognitive capabilities. Agent-based systems focus on the use of agents for modeling evolutionary behavior in system evaluation (system studies and analysis).

1.3.8 Autonomous Vehicles

Hallerbach et al. discussed the application of agent-based approaches to the development and validation of automated driving systems via a digital twin of the vehicle under test and agent-based microscopic traffic simulation. Waymo has created Carcraft, a multi-agent simulation environment, to test algorithms for self-driving cars. It simulates traffic interactions between human drivers, pedestrians, and automated vehicles. People's behavior is imitated by artificial agents based on data from real human behavior. The basic idea of using agent-based modeling to understand self-driving cars was discussed as early as 2003.

1.4 Algorithm implementation

Many ABM frameworks are designed for serial von Neumann computer architectures, which limits the speed and scalability of implemented models. Since emergent behavior in large-scale ABMs depends on population size, scalability restrictions may hinder model validation. These limitations are mostly addressed through distributed computing, and frameworks such as Repast HPC are designed specifically for these kinds of implementations. While such approaches map well onto cluster and supercomputer architectures, issues related to communication and synchronization, as well as deployment complexity, remain potential obstacles to their widespread adoption.

A recent development is the use of data-parallel algorithms for ABM simulations on graphics processing units (GPUs). Extreme memory bandwidth combined with the sheer number-crunching power of a multi-processor GPU enables simulation of millions of agents at tens of frames per second.

1.4.1 Integration with other modeling forms

Because agent-based modeling is more a modeling framework than a particular piece of software or platform, it is often used in conjunction with other modeling forms. For instance, agent-based models have been integrated with geographic information systems (GIS). This provides a useful combination in which the ABM serves as a process model and the GIS system provides a model of pattern. Likewise, social network analysis (SNA) tools and agent-based models are sometimes integrated, where the ABM is used to simulate the dynamics on the network while the SNA tool models and analyzes the network of interactions.

1.5 Verification and validation (V&V)

Verification and validation (V&V) of simulation models is extremely important. Verification involves making sure the implemented model matches the conceptual model, while validation ensures that the implemented model bears some relationship to the real world. Face validation, sensitivity analysis, calibration, and statistical validation are different aspects of validation. A discrete-event simulation framework approach to the validation of agent-based systems has been proposed, and comprehensive resources on the empirical validation of agent-based models are available.

As an example of a V&V technique, consider VOMAS (virtual overlay multi-agent system), a software-engineering-based approach in which a virtual overlay multi-agent system is developed alongside the agent-based model. Niazi et al. also provide an example of using VOMAS to verify and validate a forest-fire simulation model. Another software engineering approach, test-driven development, has been adapted to agent-based model verification. This approach has the further advantage of allowing automated verification through unit-testing tools; a sketch of the idea follows.
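A hedged sketch of that unit-testing idea (not the VOMAS framework itself): a simple invariant of a toy agent model, checked automatically with Python's built-in unittest module; the model and its invariants are invented for demonstration.

```python
import unittest

# A deliberately tiny "model" to test: agents hold non-negative wealth and
# trade a fixed amount; total wealth should be conserved (invented invariant).
def trade(wealth, i, j, amount=1):
    if wealth[i] >= amount:
        wealth[i] -= amount
        wealth[j] += amount
    return wealth

class TestTradeModel(unittest.TestCase):
    def test_wealth_is_conserved(self):
        wealth = [5, 3, 2]
        total_before = sum(wealth)
        trade(wealth, 0, 2)
        self.assertEqual(sum(wealth), total_before)

    def test_wealth_never_negative(self):
        wealth = [0, 1]
        trade(wealth, 0, 1)           # agent 0 cannot afford the trade
        self.assertTrue(all(w >= 0 for w in wealth))

if __name__ == "__main__":
    unittest.main()
```

Tests like these run automatically on every change to the model code, catching violations of intended model invariants early.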

2. Intelligent agent model

In artificial intelligence, an intelligent agent (IA) is anything that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance through learning or by using knowledge. It may be simple or complex: a thermostat is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome.

Figure: Simple reflex agent diagram


Leading AI textbooks define "artificial intelligence" as "the study and design of intelligent agents," a definition that sees goal-directed behavior as the essence of intelligence. The term " rational agent ", borrowed from economics, is also used to describe goal-oriented agents.

The agent has an "objective function" that encapsulates all of the agent's goals. Such an agent is designed to create and execute any plan that, once completed, maximizes the expected value of the objective function. For example, a reinforcement learning agent has a "reward function" that allows the programmer to shape the agent's desired behavior, and an evolutionary algorithm's behavior is shaped by a "fitness function."

Agents in artificial intelligence are closely related to agents in economics, and versions of the agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, and in many interdisciplinary socio-cognitive modeling and computer social simulation studies.

Agents are often described schematically as abstract functional systems similar to computer programs. Abstract descriptions of agents are called abstract intelligent agents (AIA) to distinguish them from real-world implementations. Autonomous intelligent agents are designed to function without human intervention. Agents are also closely related to software agents (autonomous computer programs that perform tasks on behalf of users) .

2.1 Definition of artificial intelligence

Computer science defines artificial intelligence research as the study of intelligent agents. Newer AI textbooks define an "agent" as:

  • "Anything that can be viewed as sensing its environment through sensors and acting on that environment through actuators"

Define a "rational agent" as:

  • "A subject that maximizes the expected value of a measure of performance, based on past experience and knowledge."

And defines the field of "artificial intelligence" research as:

"Research and Design of Rational Subjects"

Kaplan and Haenlein give a similar definition of AI: "The ability of a system to correctly interpret external data, learn from such data, and, through flexible adaptation, use these learnings to achieve specific goals and tasks."

Padgham & Winikoff (2005) agree that the agent is located in the environment and responds to changes in the environment in a timely (though not necessarily real-time) manner. However, the agent must also actively pursue goals in a flexible and robust manner. Optional requirements include that the agent is rational and that the agent is capable of belief-desire-intention analysis.

2.1.1 Advantages of this definition

Philosophically, this definition avoids several lines of criticism. Unlike the Turing test, it does not refer to human intelligence in any way. Thus there is no need to discuss whether the intelligence is "real" or "simulated" (that is, "synthetic" versus "artificial" intelligence), and it does not indicate that such a machine has a mind, consciousness, or true understanding (i.e., it does not imply John Searle's "strong AI hypothesis"). Nor does it attempt to draw a sharp dividing line between behaviors that are "intelligent" and behaviors that are "unintelligent": programs need only be measured against their objective function.

More importantly, it has many practical advantages that help drive AI research forward. It provides a reliable and scientific way to test programs; researchers can compare directly by asking which agent is best at maximizing a given "objective function", and even combine different approaches to solve isolated problems. It also provides them with a common language to communicate with other fields - such as mathematical optimization (defined in terms of "goals") or economics (using the same definition of "rational agent").

2.2 Objective function

An agent that is assigned an explicit "objective function" is considered more intelligent if it consistently takes actions that successfully maximize its programmed objective function. The goal can be simple ("1 if the agent wins the game of Go, 0 otherwise") or complex ("perform actions mathematically similar to ones that succeeded in the past"). The "objective function" encapsulates all of the goals the agent is driven to act on; in the case of rational agents, the function also encapsulates the acceptable trade-offs between accomplishing conflicting goals. (Terminology varies; for example, some agents seek to maximize or minimize a "utility function", "objective function", or "loss function".)

Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", it has a "reward function" that encourages some types of behavior and punishes others. Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food. Some AI systems, such as nearest-neighbor, reason by analogy rather than being explicitly goal-driven; these systems generally have no goals, except to the degree that goals are implicit in their training data. Such goal-less systems can still be benchmarked if they are framed as systems whose "goal" is to accomplish their narrow classification task.
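As a hedged illustration of the difference between a bare objective function and a shaped reward (the environment, step size, and shaping coefficient are invented and do not come from any particular framework):

```python
# Illustrative only: an "agent" here is just a candidate action sequence on a line.
# The objective function pays 1 for reaching the goal position, 0 otherwise.
GOAL = 10

def objective(position):
    return 1.0 if position >= GOAL else 0.0

def shaped_reward(old_position, new_position):
    # Reward shaping: small credit for incremental progress toward the goal,
    # plus the terminal objective itself.
    return 0.1 * (new_position - old_position) + objective(new_position)

def evaluate(actions, reward_fn):
    position, total = 0, 0.0
    for step in actions:              # each action moves +1 or -1
        new_position = position + step
        total += reward_fn(position, new_position)
        position = new_position
    return total

always_forward = [1] * 10
wandering = [1, -1] * 5

print(evaluate(always_forward, shaped_reward))   # rewarded along the way and at the goal
print(evaluate(wandering, shaped_reward))        # little progress, little reward
```

The bare objective only pays out at the goal, whereas the shaped reward also credits incremental progress, which is the motivation for reward shaping mentioned later in this section.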

Systems that are not traditionally considered agents, such as knowledge-representation systems, are sometimes subsumed into the paradigm by framing them as agents whose goal is, for example, to answer questions as accurately as possible; the concept of an "action" is here extended to encompass the "act" of giving an answer to a question. As an additional extension, mimicry-driven systems can be framed as agents that optimize an "objective function" based on how closely the agent succeeds in mimicking the desired behavior. In the generative adversarial networks (GANs) of the 2010s, an "encoder"/"generator" component attempts to mimic and improvise human text composition: the generator attempts to maximize a function encapsulating how well it can fool an adversarial "predictor"/"discriminator" component.

While GOFAI systems generally accept explicit objective functions, this paradigm can also be applied to neural networks and evolutionary computation. Reinforcement learning can generate intelligent agents that appear to act in ways designed to maximize a "reward function". Sometimes, instead of setting the reward function directly equal to the desired benchmark evaluation function, machine learning programmers use reward shaping, which rewards the machine initially for incremental progress in learning (incremental progress). According to Yann LeCun in 2018, "Most learning algorithms that people come up with are essentially minimizing some objective function." AlphaZero Chess has a simple objective function; each win counts as +1 point, and each loss counts as -1 point. The objective function for self-driving cars must be more complex. Evolutionary computation can evolve intelligent agents that appear to behave in ways designed to maximize a "fitness function" that affects how many offspring each agent is allowed to leave behind.

The theoretical, uncomputable AIXI design is a maximally intelligent agent in this paradigm; in the real world, however, agents are constrained by finite time and hardware resources, and scientists compete to produce algorithms that achieve progressively higher scores on benchmarks using real-world hardware.

2.3 Types of agents

2.3.1 Russell and Norvig's classification

Russell & Norvig (2003) divided agents into five categories according to their perception and ability:

2.3.1.1 Simple reflex agents

Figure: Simple reflex agent


Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on condition-action rules: "if condition, then action".

This agent function succeeds only when the environment is fully observable. Some reflex agents can also contain information about their current state, which allows them to disregard conditions whose actuators have already been triggered.

For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable. It may be possible to escape an infinite loop if the agent can randomize its actions.
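A minimal sketch of the condition-action idea, using the thermostat example mentioned earlier in this section; the temperature thresholds and action names are invented:

```python
# Simple reflex agent: acts on the current percept only, via condition-action rules.
def thermostat_agent(percept):
    temperature = percept["temperature"]
    if temperature < 18.0:
        return "heat_on"        # rule: if too cold, turn heating on
    if temperature > 24.0:
        return "cool_on"        # rule: if too warm, turn cooling on
    return "idle"               # otherwise do nothing

for reading in (15.0, 21.0, 27.0):
    print(reading, "->", thermostat_agent({"temperature": reading}))
```

Nothing from earlier readings is stored; the decision depends solely on the current percept, which is what makes this a simple reflex agent.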

2.3.1.2 Model-based reflex agent

Figure: Model-based reflex agent


A model-based agent can handle partially observable environments. Its current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen. This knowledge of "how the world works" is called a model of the world, hence the name "model-based agent".

A model-based reflex agent maintains some sort of internal model that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. Percept history and the impact of actions on the environment can be determined using the internal model. The agent then chooses an action in the same way as a reflex agent.

Agents can also use models to describe and predict the behavior of other agents in the environment.
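A tiny, invented sketch of the internal-state idea: a vacuum-style agent that remembers which cells it already believes to be clean, so it can act sensibly even though it only perceives the cell it currently occupies.

```python
class ModelBasedVacuum:
    """Model-based reflex agent: keeps an internal model of cells it believes
    are clean, although each percept covers only the current cell."""

    def __init__(self, n_cells=4):
        self.believed_clean = [False] * n_cells   # internal model of unseen cells
        self.position = 0

    def step(self, percept_is_dirty):
        if percept_is_dirty:
            self.believed_clean[self.position] = True   # will be clean after sucking
            return "suck"
        self.believed_clean[self.position] = True       # observed to be clean
        remaining = [i for i, clean in enumerate(self.believed_clean) if not clean]
        if not remaining:
            return "stop"
        target = min(remaining, key=lambda i: abs(i - self.position))
        move = 1 if target > self.position else -1
        self.position += move
        return "move_right" if move == 1 else "move_left"

agent = ModelBasedVacuum()
for dirty in (True, False, True, False, False):
    print(agent.step(dirty))
```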

2.3.1.3 Goal-based agents

Figure: Model-based, goal-based agent


Goal-based agents further extend the capabilities of model-based agents by using "goal" information. Goal information describes situations that are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals.
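An invented sketch of the goal-based idea: the agent uses a model of its environment (a small grid with blocked cells) to search for a sequence of actions that reaches a goal state, here with breadth-first search; the grid layout and goal are made up for illustration.

```python
from collections import deque

# Toy world model: a 4x4 grid, '#' cells are blocked, 'G' is the goal (invented layout).
GRID = [
    "....",
    ".##.",
    "....",
    ".#.G",
]
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def plan_to_goal(start=(0, 0)):
    """Goal-based agent: search the model for an action sequence reaching 'G'."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (x, y), actions = frontier.popleft()
        if GRID[y][x] == "G":
            return actions                      # action sequence achieving the goal
        for name, (dx, dy) in MOVES.items():
            nx, ny = x + dx, y + dy
            if 0 <= nx < 4 and 0 <= ny < 4 and GRID[ny][nx] != "#" and (nx, ny) not in visited:
                visited.add((nx, ny))
                frontier.append(((nx, ny), actions + [name]))
    return None                                 # goal unreachable

print(plan_to_goal())
```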

2.3.1.4 Utility-based agents

Figure: Model-based, utility-based agent


Goal-based agents only distinguish between goal states and non-goal states. It is also possible to define a measure of the desirability of a particular state. This measure can be obtained by using a utility function that maps states to measures of state utility. More general performance metrics should allow comparison of how well the agent's goals are met according to different world states. The term utility can be used to describe how "happy" an agent is.

A rational utility-based agent chooses actions that maximize the expected utility of the action's outcomes—that is, what the agent expects to get, on average, given the probability and utility of each outcome. A utility-based agent must model and track its environment, tasks that involve extensive research into perception, representation, reasoning, and learning.
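A hedged sketch of expected-utility action selection (the actions, outcome probabilities, and utilities are invented for illustration):

```python
# Utility-based agent: pick the action with the highest expected utility.
# Each action maps to (probability, resulting_state) pairs; all numbers are invented.
ACTION_OUTCOMES = {
    "take_highway":  [(0.8, "arrive_early"), (0.2, "stuck_in_traffic")],
    "take_backroad": [(1.0, "arrive_on_time")],
}

UTILITY = {"arrive_early": 10.0, "arrive_on_time": 6.0, "stuck_in_traffic": -5.0}

def expected_utility(action):
    return sum(p * UTILITY[state] for p, state in ACTION_OUTCOMES[action])

def choose_action():
    return max(ACTION_OUTCOMES, key=expected_utility)

for action in ACTION_OUTCOMES:
    print(action, expected_utility(action))
print("chosen:", choose_action())
```

Unlike a goal-based agent, which only distinguishes goal from non-goal states, this agent weighs how desirable each possible outcome is and how likely it is to occur.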

2.3.1.5 Learning agents

Figure: General learning agent


The advantage of learning is that it allows an agent to initially operate in an unknown environment with greater capabilities than its initial knowledge itself might allow. The most important distinction is between the "learning element" responsible for improvement and the "performance element" responsible for selecting external actions.

The learning element uses feedback from the "critic" on how the agent is doing and determines how the performance element, or "actor", should be modified to do better in the future. The performance element is what we previously considered to be the entire agent: it takes in percepts and decides on actions.

The last component of the learning agent is the "problem generator". It is responsible for suggesting actions that will lead to new and informative experiences.

2.3.2 Weiss' classification

Weiss (2013) defines four classes of agents:

  • Logic-based agents, in which the decision about what action to perform is made via logical deduction;
  • Reactive agents, in which decision making is implemented in some form of direct mapping from situation to action;
  • Belief-desire-intention agents, in which decision making depends upon the manipulation of data structures representing the agent's beliefs, desires, and intentions; and finally,
  • Layered architectures, in which decision making is realized via various software layers, each of which more or less explicitly reasons about the environment at a different level of abstraction.

2.4 Hierarchies of agents

To actively perform their functions, intelligent agents today are normally gathered in a hierarchical structure containing many "sub-agents". Intelligent sub-agents process and perform lower-level functions. Taken together, the agent and its sub-agents create a complete system that can accomplish difficult tasks or goals with behaviors and responses that display a form of intelligence.

Generally, an agent can be constructed by separating the body into sensors and actuators, so that it operates with a complex perception system that takes the description of the world as input for a controller and outputs commands to the actuators. However, a hierarchy of controller layers is often necessary to balance the immediate reaction desired for low-level tasks with the slow reasoning about complex, high-level goals.

2.4.1 Agent function

A simple agent program can be defined mathematically as a function f (called the "agent function") that maps every possible percept sequence to a possible action the agent can perform, or to a coefficient, feedback element, function, or constant that affects the eventual action:

f : P* → A

The agent function is an abstract concept, since it may incorporate various principles of decision making such as calculation of the utility of individual options, deduction over logic rules, fuzzy logic, and so on.

The agent program, by contrast, is a concrete implementation that maps each possible percept to an action.

We use the term "perception" to refer to the subject's perceptual input at any given moment. An agent is anything that can be thought of as perceiving its environment through sensors and acting on that environment through actuators.

2.5 Applications

Figure An example of an automated live assistant providing automated customer service on a web page.


Intelligent agents are applied as automated online assistants, where they function to perceive the needs of customers in order to perform individualized customer service. Such an agent may consist essentially of a dialogue system, an avatar, and an expert system to provide specific expertise to the user. They can also be used to optimize the coordination of human groups online. Hallerbach et al. discussed the application of agent-based approaches to the development and validation of automated driving systems via a digital twin of the vehicle under test and agent-based microscopic traffic simulation. Waymo has created Carcraft, a multi-agent simulation environment, to test algorithms for self-driving cars. It simulates traffic interactions between human drivers, pedestrians, and automated vehicles. People's behavior is imitated by artificial agents based on data from real human behavior. The basic idea of using agent-based modeling to understand self-driving cars was discussed as early as 2003.

2.6 Other definitions and uses

"Agent" is also often used as a vague marketing term, sometimes synonymous with "virtual personal assistant." Some 20th-century definitions describe agents as programs that assist users or act on behalf of users. These examples are called software agents, and "intelligent software agents" (that is, software agents with intelligence) are sometimes referred to as "agents".

Nikola Kasabov believes that an agent system should have the following characteristics:

  • Gradually adapt to new problem-solving rules
  • Online and real-time adaptation
  • Ability to analyze oneself in terms of actions, mistakes and successes
  • Learn and improve through interaction with the environment
  • Learn quickly from large amounts of data
  • Has memory-based example storage and retrieval capabilities
  • Has parameters representing short-term and long-term memory, age, forgetting, etc.

References

Niazi, Muaz; Hussain, Amir (2011). "Agent-based Computing from Multi-agent Systems to Agent-Based Models: A Visual Survey". Scientometrics. 89 (2): 479–499.

wiki: Agent

wiki: Agent (economics)

wiki: Biological agent

Baidu translation: agent-based

wiki: Agent-based model

Samuelson, Douglas A. (February 2005). “Agents of Change”. OR/MS Today.

Samuelson, Douglas A.; Macal, Charles M. (August 2006). “Agent-Based Modeling Comes of Age”. OR/MS Today.

wiki: Agent-based model in biology

wiki: Agent-based computational economics

wiki: Agent-based social simulation

wiki: Intelligent agent

Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. Chapter 2. ISBN 0-13-790395-2.

Weiss, G. (2013). Multiagent systems (2nd ed.). Cambridge, MA: MIT Press. ISBN 978-0-262-01889-0.

Multi-agent system

Origin blog.csdn.net/qq_32515081/article/details/127198286