Learning in Networked Interactions: A Replicator Dynamics Approach



Bloembergen, Daan, Caliskanelli, Ipek and Tuyls, Karl
(2015) Learning in Networked Interactions: A Replicator Dynamics Approach. Artificial Life and Intelligent Agents, ALIA 2014, vol. 519, pp. 44-58.

Bloembergen2014alia.pdf - Author Accepted Manuscript


Abstract

Many real-world scenarios can be modelled as multi-agent systems, in which multiple autonomous decision makers interact in a single environment. The complex and dynamic nature of such interactions prevents hand-crafting solutions for all possible scenarios, hence learning is crucial. Studying the dynamics of multi-agent learning is imperative for selecting and tuning the right learning algorithm for the task at hand. So far, analysis of these dynamics has been mainly limited to normal form games or unstructured populations. However, many multi-agent systems are highly structured, complex networks, with agents interacting only locally. Here, we study the dynamics of such networked interactions, using the well-known replicator dynamics of evolutionary game theory as a model for learning. Different learning algorithms are modelled by altering the replicator equations slightly. In particular, we investigate lenience as an enabler for cooperation. Moreover, we show how well-connected, stubborn agents can influence the learning outcome. Finally, we investigate the impact of structural network properties on the learning outcome, as well as the influence of mutation driven by exploration.
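As a rough illustration of the replicator dynamics the abstract refers to (not the paper's exact networked or lenient variants), the following sketch integrates the standard single-population replicator equation dx_i/dt = x_i[(Ax)_i - x'Ax] for a symmetric 2x2 game; the payoff matrix here is a hypothetical Prisoner's Dilemma chosen for illustration:

```python
import numpy as np

# Hypothetical Prisoner's Dilemma payoffs (rows/cols = cooperate, defect);
# these numbers are an assumption for illustration, not taken from the paper.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics dx_i/dt = x_i[(Ax)_i - x.Ax]."""
    fitness = A @ x          # expected payoff of each pure strategy
    avg = x @ fitness        # population-average payoff
    return x + dt * x * (fitness - avg)

x = np.array([0.9, 0.1])     # start mostly cooperative
for _ in range(2000):
    x = replicator_step(x, A)

# In a Prisoner's Dilemma defection strictly dominates, so the population
# drifts towards the all-defect state regardless of the starting point.
print(x)
```

The Euler step preserves the simplex (the shares still sum to one), and modelling other learning algorithms, as the paper does, amounts to adding terms to this update, e.g. a mutation term for exploration.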

Item Type: Article
Uncontrolled Keywords: Reinforcement learning, Social networks, Replicator dynamics
Subjects: QA75
Depositing User: Symplectic Admin
Date Deposited: 12 Nov 2015 08:44
Last Modified: 16 Dec 2022 15:42
DOI: 10.1007/978-3-319-18084-7_4
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/2036519