PARL is a high-performance, flexible reinforcement learning framework.
English | 简体中文
Documentation | 中文文档

PARL is a flexible and highly efficient reinforcement learning framework.


Reproducible. We provide algorithms that reliably reproduce the results of many influential reinforcement learning papers.

Large Scale. Supports high-performance parallel training with thousands of CPUs and multiple GPUs.

Reusable. Algorithms provided in the repository can be adapted directly to a new task simply by defining a forward network; the training mechanism is built automatically.

Extensible. Build new algorithms quickly by inheriting the abstract class in the framework.


PARL aims to build an agent for training algorithms to perform complex tasks. The main abstractions introduced by PARL, which are used to build an agent recursively, are the following:


Model is abstracted to construct the forward network, which defines a policy network or critic network that takes the state as input.


Algorithm describes the mechanism to update the parameters in a Model; an algorithm typically contains at least one model.


Agent, a data bridge between the environment and the algorithm, is responsible for data I/O with the outside environment and describes data preprocessing before feeding data into the training process.

Note: For more information about base classes, please visit our tutorial and API documentation.
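To make the layering concrete, here is a plain-Python sketch of how the three abstractions nest. The class and method names below are illustrative only; the real base classes are `parl.Model`, `parl.Algorithm`, and `parl.Agent`, and their actual interfaces are covered in the tutorial and API documentation.

```python
class Model:
    """Forward network: maps a state to an output (a toy linear policy here)."""
    def __init__(self, weight=2.0):
        self.weight = weight

    def forward(self, state):
        return self.weight * state


class Algorithm:
    """Owns at least one Model and defines how its parameters are updated."""
    def __init__(self, model, lr=0.1):
        self.model = model
        self.lr = lr

    def predict(self, state):
        return self.model.forward(state)

    def learn(self, gradient):
        # Toy update rule: one gradient step on the single weight.
        self.model.weight -= self.lr * gradient


class Agent:
    """Data bridge between the environment and the algorithm."""
    def __init__(self, algorithm):
        self.algorithm = algorithm

    def predict(self, raw_state):
        state = float(raw_state)  # stand-in for real data preprocessing
        return self.algorithm.predict(state)

    def learn(self, gradient):
        self.algorithm.learn(gradient)


agent = Agent(Algorithm(Model()))
print(agent.predict(3))  # 6.0
```

The point of the nesting is separation of concerns: swapping the forward network (Model) or the update rule (Algorithm) does not require touching the environment I/O code in Agent.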


PARL provides a compact API for distributed training, allowing users to transfer the code into a parallelized version by simply adding a decorator. For more information about our APIs for parallel training, please visit our documentation.
Here is a Hello World example to demonstrate how easy it is to leverage outer computation resources.
import parl

@parl.remote_class
class Agent(object):

    def say_hello(self):
        print("Hello World!")

    def sum(self, a, b):
        return a + b

parl.connect('localhost:8037')
agent = Agent()
ans = agent.sum(1, 5)  # it runs remotely, without consuming any local computation resources

Two steps to use outer computation resources:

  1. Decorate a class with parl.remote_class; the decorated class becomes a new class whose instances can run on other CPUs or machines.
  2. Call parl.connect to initialize parallel communication before creating an object. Calls to the object's methods are executed elsewhere and consume no local computation resources.
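To build intuition for step 1, here is a toy, purely local sketch of the proxy idea behind such a decorator. This is not PARL's implementation: the real `parl.remote_class` serializes method calls and dispatches them to worker processes on a cluster, whereas this stand-in merely intercepts and forwards calls in-process.

```python
import functools


def remote_class_sketch(cls):
    """Toy stand-in for a remote-class decorator: wraps a class so every
    method call goes through a proxy, where a real implementation would
    serialize the call and dispatch it to a remote worker."""

    class Proxy:
        def __init__(self, *args, **kwargs):
            # A real version would instantiate cls on a remote machine.
            self._instance = cls(*args, **kwargs)
            self.call_log = []  # record of dispatched calls

        def __getattr__(self, name):
            method = getattr(self._instance, name)

            @functools.wraps(method)
            def dispatch(*args, **kwargs):
                # Real version: send (name, args, kwargs) over the network
                # and wait for the result from the remote worker.
                self.call_log.append(name)
                return method(*args, **kwargs)

            return dispatch

    return Proxy


@remote_class_sketch
class Agent:
    def sum(self, a, b):
        return a + b


agent = Agent()
print(agent.sum(1, 5))  # 6 -- in PARL this call would run remotely
print(agent.call_log)   # ['sum']
```

Because the proxy intercepts attribute access, user code keeps the exact same call syntax (`agent.sum(1, 5)`) whether the object runs locally or remotely, which is what makes the "add a decorator" migration possible.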
As shown in the figure above, real actors (orange circles) run on the CPU cluster, while the learner (blue circle) runs on the local GPU together with several remote actors (yellow circles with dotted edges).

Users can write code simply, much as they would write multi-threaded code, but with actors consuming remote resources. We also provide examples of parallelized algorithms such as IMPALA, A2C and GA3C. For usage details, please refer to these examples.



  • Python 2.7 or 3.5+ (on Windows, PARL only supports Python 3.7+).
  • paddlepaddle>=1.8.5 (optional; not required if you only want to use the parallelization APIs)
pip install parl


Demos: NeurIPS 2018 | Half-Cheetah | Breakout