Hello,
Is it possible to continue training from checkpoints in MARLlib?
A bit of context: I'm experimenting with policy transfer/reuse across multi-agent teams and would like to test a scenario where a MARL team starts from a model learned on a related task in the same environment and adapts it to the target task (e.g., in MPE, reusing the model learned for simple_spread to solve simple_adversary).
Thanks in advance!