SMAC-Exp

StarCraft Multi-Agent Exploration Challenge

Dataset Information
Modalities
Environment
Languages
English
Introduced
2022
License
MIT
Homepage

Overview

The StarCraft Multi-Agent Challenges+ (SMAC-Exp) requires agents to learn multi-stage tasks and the use of environmental factors without precise reward functions. The previous challenge (SMAC), recognized as a standard benchmark for multi-agent reinforcement learning, is mainly concerned with ensuring that all agents cooperatively eliminate approaching adversaries through fine-grained micro-control guided by obvious reward functions. This challenge, in contrast, targets the exploration capability of MARL algorithms: agents must efficiently learn implicit multi-stage tasks and environmental factors in addition to micro-control. The benchmark covers both offensive and defensive scenarios. In the offensive scenarios, agents must first locate opponents and then eliminate them. The defensive scenarios require agents to exploit topographic features; for example, agents need to position themselves behind protective structures so that enemies have a harder time attacking them.

Variants: Off_Superhard_sequential, Def_Outnumbered_sequential, Def_Armored_sequential, Off_Hard_parallel, Off_Superhard_parallel, Off_Complicated_parallel, Def_Outnumbered_parallel, Off_Distant_parallel, Off_Near_parallel, Def_Armored_parallel, Def_Infantry_parallel, SMAC-Exp
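
The sketch below shows a minimal random-agent rollout against one of these variants. It assumes that SMAC-Exp keeps the standard SMAC environment interface (StarCraft2Env from the smac package) and that a variant name such as Def_Infantry_parallel can be passed as the map name; the actual SMAC-Exp package and map identifiers may differ.

# Minimal random-agent rollout sketch. Assumes SMAC-Exp exposes the same
# interface as the original SMAC benchmark (smac.env.StarCraft2Env) and that
# a variant name like "Def_Infantry_parallel" is accepted as map_name --
# both are assumptions, not confirmed by this page.
import numpy as np
from smac.env import StarCraft2Env


def run_episode(map_name: str = "Def_Infantry_parallel") -> float:
    env = StarCraft2Env(map_name=map_name)
    env_info = env.get_env_info()
    n_agents = env_info["n_agents"]

    env.reset()
    terminated = False
    episode_return = 0.0

    while not terminated:
        # Each agent picks a random action among those currently available.
        actions = []
        for agent_id in range(n_agents):
            avail = env.get_avail_agent_actions(agent_id)
            avail_ids = np.nonzero(avail)[0]
            actions.append(np.random.choice(avail_ids))

        # The environment returns a shared team reward and a terminal flag.
        reward, terminated, _info = env.step(actions)
        episode_return += reward

    env.close()
    return episode_return


if __name__ == "__main__":
    print("Episode return:", run_episode())

Because the reward is shared by the whole team and the multi-stage structure is implicit, a random policy like this typically earns little return; it is only meant to illustrate the interaction loop.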

Associated Benchmarks

This dataset is used in 1 benchmark:

Recent Benchmark Submissions

Task:  Multi-agent Reinforcement Learning
Model: DRIMA
Paper: Neural Processes with Stochastic Attention: …
Date:  2022-04-11

Research Papers

Recent papers with results on this dataset: