This example reproduces the PPO algorithm of deep reinforcement learning based on PARL, reaching the same level of performance as the paper on the MuJoCo benchmarks.

Paper: Proximal Policy Optimization Algorithms
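The core of the algorithm being reproduced is the paper's clipped surrogate objective. A minimal NumPy sketch of that loss (the function name and default `clip_eps` are illustrative, not this repo's actual API):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    """Clipped surrogate loss from the PPO paper, negated for minimization.

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: estimated advantage for each sampled action
    """
    unclipped = ratio * advantage
    # Clipping the ratio removes the incentive to move the policy
    # outside the [1 - eps, 1 + eps] trust region.
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return -np.minimum(unclipped, clipped).mean()
```

Taking the minimum of the clipped and unclipped terms makes the bound pessimistic: a large ratio only helps the objective when the advantage sign agrees with it.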
PARL currently supports the open-source version of MuJoCo provided by DeepMind, so users no longer need to download the MuJoCo binaries, install mujoco-py, or obtain a license. For more details, please visit MuJoCo.
```shell
# To train an agent for a discrete-action game (Atari; PongNoFrameskip-v4 by default)
python train.py

# To train an agent for a continuous-action game (MuJoCo)
python train.py --env 'HalfCheetah-v4' --continuous_action --train_total_steps 1000000
```
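Between rollout collection and each PPO update, the collected transitions are turned into advantage estimates; the paper uses Generalized Advantage Estimation (GAE). A minimal sketch of the standard formulation (names and default coefficients are illustrative, not necessarily what this repo uses):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Compute GAE advantages for one rollout.

    rewards: array of length T
    values:  array of length T + 1 (includes a bootstrap value
             for the state after the last step)
    """
    T = len(rewards)
    advantages = np.zeros(T)
    last = 0.0
    # Accumulate discounted TD residuals backwards through time.
    for t in reversed(range(T)):
        # delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        last = delta + gamma * lam * last
        advantages[t] = last
    return advantages
```

With `lam=1` this reduces to Monte Carlo returns minus the value baseline; with `lam=0` it reduces to one-step TD residuals.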
When environment simulation runs very slowly, you can accelerate training by setting `xparl_addr` and `env_num > 1`.

First, start a local cluster with 8 CPUs:

```shell
xparl start --port 8010 --cpu_num 8
```
Note that if you have started a master before, you don't have to run the above
command. For more information about the cluster, please refer to our
documentation.
Then we can start the distributed training by running:
```shell
# To train an agent distributedly
# for a discrete-action game (Atari)
python train.py --env "PongNoFrameskip-v4" --env_num 8 --xparl_addr 'localhost:8010'

# for a continuous-action game (MuJoCo)
python train.py --env 'HalfCheetah-v4' --continuous_action --train_total_steps 1000000 --env_num 5 --xparl_addr 'localhost:8010'
```
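With `env_num > 1`, each remote actor steps its own copy of the environment and the learner gathers one batch of transitions from all of them per step. A toy sketch of that gather loop, with a stub class standing in for the real remote xparl actors (everything here is illustrative):

```python
import numpy as np

class StubEnv:
    """Stand-in for one remote actor's environment (illustrative only)."""
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)

    def step(self):
        # A real actor would step its environment and return the transition;
        # here we just return a fake reward.
        return self.rng.standard_normal()

def gather_step(envs):
    """One synchronized step across all parallel environments."""
    return np.array([env.step() for env in envs])

envs = [StubEnv(seed=i) for i in range(8)]  # corresponds to env_num 8
rewards = gather_step(envs)                 # one reward per environment
```

The speedup comes from overlapping the (slow) environment simulation across actors, so each learner update waits for one parallel step rather than `env_num` sequential ones.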