Emergence of communication in multi-agent systems using reinforcement learning
In this paper, a new approach to the emergence of communication between autonomous agents is introduced. A learning scheme is presented that allows efficient communication to emerge between agents in cooperative systems. The classical reinforcement learning framework, extended to multi-agent systems, is used. Language capabilities are modeled by modifying the agents' policies: so-called linguistic state and action variables are added to extend the agents' state and action spaces. Linguistic state variables represent the signal received by an agent, and linguistic action variables represent the signal sent by an agent. The set of agents is divided into receivers and senders according to their ability to receive or send communication signals. An experiment with a two-agent system is presented, showing how a simple communication protocol evolves simultaneously with non-linguistic behavior as a tool for coordinating the agents' actions to accomplish a task. It is concluded that the presented approach can be applied to establish efficient communication within real-world heterogeneous, task-oriented multi-agent systems.
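To make the setup concrete, the sketch below is a minimal toy illustration (not the authors' implementation) of the sender/receiver idea: the sender's linguistic action variable becomes the receiver's linguistic state variable, and independent tabular Q-learning on a shared cooperative reward lets a consistent signal-to-action mapping emerge. All names, table sizes, and hyperparameters here are assumptions chosen for the example.

```python
import random

random.seed(0)

N_TARGETS = 2   # non-linguistic task states the sender can observe
N_SIGNALS = 2   # size of the signal alphabet (linguistic variables)
ALPHA = 0.1     # learning rate (assumed for this toy example)
EPS = 0.1       # exploration rate during training

# Independent tabular Q-functions (hypothetical toy setup):
# the sender maps an observed target to a signal (its linguistic action);
# the receiver maps a received signal (its linguistic state) to an action.
q_sender = [[0.0] * N_SIGNALS for _ in range(N_TARGETS)]
q_receiver = [[0.0] * N_TARGETS for _ in range(N_SIGNALS)]

def eps_greedy(row, eps):
    """Pick a random index with probability eps, else the greedy one."""
    if random.random() < eps:
        return random.randrange(len(row))
    return max(range(len(row)), key=row.__getitem__)

def episode(eps, learn=True):
    target = random.randrange(N_TARGETS)        # hidden task state
    signal = eps_greedy(q_sender[target], eps)  # sender's linguistic action
    action = eps_greedy(q_receiver[signal], eps)
    reward = 1.0 if action == target else 0.0   # shared cooperative reward
    if learn:  # one-step Q-learning update for both agents
        q_sender[target][signal] += ALPHA * (reward - q_sender[target][signal])
        q_receiver[signal][action] += ALPHA * (reward - q_receiver[signal][action])
    return reward

for _ in range(5000):  # training: a signaling convention can emerge
    episode(EPS)

# Greedy evaluation: a separating convention scores 1.0, while a
# degenerate "pooling" convention still scores about 0.5.
greedy_success = sum(episode(0.0, learn=False) for _ in range(2000)) / 2000
print(f"greedy success rate: {greedy_success:.2f}")
```

In this sketch, communication is useful only because the receiver cannot observe the target directly; the same two-sided pressure is what drives the coordination of linguistic and non-linguistic behavior described above.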