Multi-Agent Reinforcement Learning for Alternating-Time Logic (Chapter)

Overview

abstract

  • Alternating-time temporal logic (ATL) extends branching-time logic by enabling quantification over paths that result from the strategic choices made by multiple agents in various coalitions within the system. While classical temporal logics express properties of “closed” systems, ATL can express properties of “open” systems resulting from interactions among several agents. Reinforcement learning (RL) is a sampling-based approach to decision-making in which learning agents, guided by a scalar reward function, discover optimal policies through repeated interactions with the environment. The challenge of translating high-level objectives into scalar rewards for RL has garnered increased interest, particularly following the success of model-free RL algorithms. This paper presents an approach for deploying model-free RL to verify multi-agent systems against ATL specifications. The key contribution is a Q-learning-based verification procedure for quantitative, non-nested classic ATL properties, demonstrated on a natural subclass of non-nested ATL formulas.
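As a rough illustration of the ingredients named in the abstract (and not a reproduction of the paper's procedure), the sketch below runs tabular Q-learning on a tiny turn-based game: states owned by the proponent coalition bootstrap with a max, states owned by the adversary bootstrap with a min, and the reward encodes a reachability property of the form ⟨⟨coalition⟩⟩ F goal. All state names, transitions, and hyperparameters here are invented for the example.

```python
import random

# Hypothetical turn-based game: states 0 and 2 are controlled by the
# proponent coalition, state 1 by the adversary; GOAL and TRAP are absorbing.
GOAL, TRAP = 3, 4
PROTAGONIST_STATES = {0, 2}          # coalition maximizes
ADVERSARY_STATES = {1}               # opposing coalition minimizes
TRANSITIONS = {                      # (state, action) -> next state
    (0, 0): 1, (0, 1): TRAP,
    (1, 0): 2, (1, 1): GOAL,
    (2, 0): GOAL, (2, 1): 1,
}
ACTIONS = [0, 1]
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.2

def reward(next_state):
    # Reachability reward for <<coalition>> F goal: pay 1 on entering the goal.
    return 1.0 if next_state == GOAL else 0.0

def bootstrap(q, state):
    # Absorbing states contribute nothing; otherwise take max for the
    # coalition's states and min for the adversary's states.
    if state in (GOAL, TRAP):
        return 0.0
    values = [q[(state, a)] for a in ACTIONS]
    return max(values) if state in PROTAGONIST_STATES else min(values)

def choose_action(q, state):
    # Epsilon-greedy behaviour for whichever player owns the current state.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    values = [q[(state, a)] for a in ACTIONS]
    best = max(values) if state in PROTAGONIST_STATES else min(values)
    return values.index(best)

def q_learning(episodes=20000, max_steps=50):
    q = {(s, a): 0.0 for s in (0, 1, 2) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        for _ in range(max_steps):
            action = choose_action(q, state)
            nxt = TRANSITIONS[(state, action)]
            target = reward(nxt) + GAMMA * bootstrap(q, nxt)
            q[(state, action)] += ALPHA * (target - q[(state, action)])
            if nxt in (GOAL, TRAP):
                break
            state = nxt
    return q

if __name__ == "__main__":
    q = q_learning()
    # The learned value of the initial state approximates GAMMA**2 = 0.81:
    # the coalition can force the goal within three steps despite the adversary.
    print(round(bootstrap(q, 0), 2))
```

Using a single Q-table with owner-dependent max/min bootstrapping is the standard treatment of turn-based two-coalition games with a discounted reachability objective; the paper's procedure for quantitative, non-nested ATL properties is more general than this toy sketch.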

publication date

  • October 16, 2024

has restriction

  • hybrid

Date in CU Experts

  • October 30, 2024 8:26 AM

Full Author List

  • Hahn EM; Perez M; Schewe S; Somenzi F; Trivedi A; Wojtczak D

author count

  • 6

Other Profiles