
[Question] How to Set Different Observations for Each Agent in a Multiagent Reinforcement Learning Environment #1817

paehal opened this issue Jan 29, 2024 · 2 comments
Labels: question (Further information is requested)


paehal commented Jan 29, 2024

❓ Question

Hello,

I have experience using Stable Baselines 3 as a module but am a beginner regarding its internal workings. I have a decent understanding of both multiagent and single-agent reinforcement learning.

My question is: In multiagent reinforcement learning using Stable Baselines 3, is it possible to provide different observation information to separate agents and have them learn independently? If so, how can this be specifically implemented?

I am using the gym-pybullet-drones repository for reinforcement learning of multi-drone control with Stable Baselines 3, which can be found here: https://github.com/utiasDSL/gym-pybullet-drones.

As per the tutorial, I am executing the following code for multiagent reinforcement learning:

cd gym_pybullet_drones/examples/
python learn.py --multiagent true

Within learn.py, learning is conducted using Stable Baselines 3's PPO in the following manner:

from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from gym_pybullet_drones.envs.MultiHoverAviary import MultiHoverAviary

# DEFAULT_AGENTS, DEFAULT_OBS, DEFAULT_ACT, and local are constants defined
# earlier in learn.py
train_env = make_vec_env(MultiHoverAviary,
                         env_kwargs=dict(num_drones=DEFAULT_AGENTS, obs=DEFAULT_OBS, act=DEFAULT_ACT),
                         n_envs=1,
                         seed=0
                         )

model = PPO('MlpPolicy',
            train_env,
            # tensorboard_log=filename+'/tb/',
            verbose=1)

model.learn(total_timesteps=int(1e7) if local else int(1e2))  # shorter training in GitHub Actions pytest

In this setup, the environment defined in MultiHoverAviary.py and its parent class BaseRLAviary.py includes the _computeObs(self) function, which combines information about all drones.

With this configuration and the learning function in learn.py, I understand that all agents share the same model and input the same information into both the Policy and Value networks for learning.

I want to modify the observations for each agent. Specifically, I want agent0 to receive only positional information about agent1, and agent1 to receive only positional information about agent0 (a rough sketch of what I mean follows below). I believe this might require setting up multiple models, but the current implementation in the gym-pybullet-drones repository with Stable Baselines 3 seems not to support this.
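
For concreteness, here is a rough sketch of the kind of override I have in mind (the class name CrossObsAviary is mine, and I am assuming the BaseAviary helpers NUM_DRONES and _getDroneStateVector() behave as in the current repository, where the first three entries of the state vector are the drone's XYZ position, so please treat the names and shapes as illustrative rather than tested):

import numpy as np
from gymnasium import spaces
from gym_pybullet_drones.envs.MultiHoverAviary import MultiHoverAviary

class CrossObsAviary(MultiHoverAviary):
    """Each drone observes only the other drone's position (2-drone case)."""

    def _observationSpace(self):
        # The declared space must match the new per-drone observation:
        # one XYZ triple per drone.
        return spaces.Box(low=-np.inf, high=np.inf,
                          shape=(self.NUM_DRONES, 3), dtype=np.float32)

    def _computeObs(self):
        obs = np.zeros((self.NUM_DRONES, 3), dtype=np.float32)
        for i in range(self.NUM_DRONES):
            other = (i + 1) % self.NUM_DRONES          # index of the other agent
            state = self._getDroneStateVector(other)   # full state of the other drone
            obs[i, :] = state[0:3]                     # keep only its XYZ position
        return obs

Even with an override like this, though, both rows of the observation are still fed to the same shared PPO policy, which is why I suspect truly independent learning would require separate models.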

I am asking this question because I think someone well-versed with Stable Baselines 3 might know a solution. As discussed in this issue here, my understanding is that multiagent reinforcement learning settings are not a primary focus in Stable Baselines 3. However, any advice or solution for the above problem would be greatly appreciated.

Thank you.



paehal commented Mar 27, 2024

It seems that there have been no comments on this matter for about 2 months.

@araffin As you appear to be an expert on SB3, may I ask if you have any comments?

mchoilab commented

SB3 does not officially support multi-agent RL. However, you can use an SB3 agent as one big agent whose observation and action spaces combine those of the "sub-agents"; take a look at the PettingZoo examples.
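
A minimal sketch of that pattern, following the PettingZoo/SuperSuit examples (simple_spread_v3 is used here purely as a stand-in parallel environment; wrapper names may vary between SuperSuit versions):

import supersuit as ss
from stable_baselines3 import PPO
from pettingzoo.mpe import simple_spread_v3

# A PettingZoo *parallel* environment: each agent already receives its own
# observation, unlike the concatenated observation in MultiHoverAviary.
env = simple_spread_v3.parallel_env()

# SuperSuit converts the multi-agent env into an SB3-style vectorized env:
# each agent becomes one "copy" of a single-agent environment, and one shared
# PPO policy is trained across all of them (parameter sharing). This requires
# all agents to have identical observation and action spaces.
env = ss.pettingzoo_env_to_vec_env_v1(env)
env = ss.concat_vec_envs_v1(env, 1, base_class="stable_baselines3")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)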
