5.2. Getting Started
Here is a concise series of guides that introduces MLPro-RL in a practical way, whether you are a first-time or an experienced MLPro user.
If you are a first-timer, then you can begin with Section (1) What is MLPro?.
If you have understood MLPro but not reinforcement learning, then you can jump to Section (2) What is Reinforcement Learning?.
If you have experience in both MLPro and reinforcement learning, then you can directly start with Section (3) What is MLPro-RL?.
After following this step-by-step guideline, you should understand MLPro-RL in practice and be ready to start using it.
- 1. What is MLPro?
If you are a first-time user of MLPro, you might wonder what MLPro is. Therefore, we recommend starting with an understanding of MLPro by checking out the following steps:
- 2. What is Reinforcement Learning?
If you have not dealt with reinforcement learning before, we recommend first understanding at least its basic concepts. There are plenty of references, articles, papers, books, and videos on the internet that explain reinforcement learning. For a deeper understanding, we recommend the book by Sutton and Barto, Reinforcement Learning: An Introduction.
- 3. What is MLPro-RL?
We expect that you have basic knowledge of both MLPro and reinforcement learning. Next, you can get an overview of MLPro-RL by following the steps below:
- 4. Understanding Environment in MLPro-RL
First of all, it is important to understand the structure of an environment in MLPro, which can be found on this page.
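To illustrate the general idea before reading that page, here is a minimal, hypothetical sketch of what an RL environment provides (plain Python with simplified names such as `reset` and `step` — this is an assumption for illustration, not the actual MLPro-RL environment API):

```python
# Illustrative sketch only -- simplified names, NOT the actual MLPro-RL API.
class GridEnvironment:
    """A 1-D grid world: the agent moves left/right and is rewarded at the goal."""

    def __init__(self, size=5):
        self.size = size
        self.state = 0

    def reset(self):
        # Return the initial state.
        self.state = 0
        return self.state

    def step(self, action):
        # action: -1 (left) or +1 (right); clamp the position to the grid.
        self.state = min(max(self.state + action, 0), self.size - 1)
        reward = 1.0 if self.state == self.size - 1 else 0.0
        done = self.state == self.size - 1
        return self.state, reward, done


env = GridEnvironment()
state = env.reset()
state, reward, done = env.step(+1)
```

Whatever the concrete class and method names, an environment always bundles the same three ingredients shown here: a state, a transition triggered by an action, and a reward signal.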
Then, you can start following some of our howto files related to the environment in MLPro-RL, as follows:
- 5. Understanding Agent in MLPro-RL
In reinforcement learning, there are two types of agent setups: single-agent RL and multi-agent RL. Both are covered by MLPro-RL. To understand the different possibilities of an agent in MLPro, you can visit this page.
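Conceptually, a multi-agent setup is a container that dispatches observations to several single agents and collects their actions. The following sketch illustrates this idea only — the class and method names here are simplified assumptions, not the actual MLPro-RL classes:

```python
# Illustrative sketch only -- simplified, NOT the actual MLPro-RL classes.
class SingleAgent:
    """Wraps a policy: maps an observation to an action."""

    def __init__(self, name, policy):
        self.name = name
        self.policy = policy

    def compute_action(self, observation):
        return self.policy(observation)


class MultiAgent:
    """Container that dispatches observations to several single agents."""

    def __init__(self):
        self.agents = []

    def add_agent(self, agent):
        self.agents.append(agent)

    def compute_action(self, observations):
        # One observation per agent; collect actions keyed by agent name.
        return {a.name: a.compute_action(o)
                for a, o in zip(self.agents, observations)}


ma = MultiAgent()
ma.add_agent(SingleAgent("a1", lambda obs: obs + 1))
ma.add_agent(SingleAgent("a2", lambda obs: obs * 2))
actions = ma.compute_action([1, 2])
```

The point of this structure is that a multi-agent setup reuses single agents unchanged, which mirrors how MLPro-RL covers both setups with one agent concept.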
Then, you can learn how to set up single-agent and multi-agent RL in MLPro-RL by following these examples:
- 6. Selecting between Model-Free and Model-Based RL
In this section, you select the direction of your RL training: model-free or model-based RL. Before choosing either path, please first review these two pages on the RL scenario and training.
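To make the scenario/training distinction concrete before reading those pages, here is a minimal, hypothetical sketch (plain Python, not the MLPro-RL classes): a scenario couples one agent with one environment for an episode, and a training run repeats that scenario while collecting results.

```python
# Illustrative sketch of the scenario/training split -- NOT MLPro-RL code.
class CountdownEnv:
    """Toy environment: the episode finishes after 3 steps with reward 1."""

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        done = self.t >= 3
        return self.t, (1.0 if done else 0.0), done


def run_episode(env, agent, max_steps=20):
    """One episode of an agent interacting with an environment (the 'scenario')."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent(state)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward


def train(env, agent, episodes=3):
    """A training run repeats the scenario and collects the episode returns."""
    return [run_episode(env, agent) for _ in range(episodes)]


returns = train(CountdownEnv(), lambda s: 0)
```

The model-free/model-based choice below only changes what happens inside the agent; the scenario and training structure stays the same.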
Model-Free Reinforcement Learning
To practice model-free RL in the MLPro-RL package, here are a video and some ready-to-use howto files that can be followed:
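As a generic illustration of what "model-free" means — the agent improves its policy from sampled transitions alone, without an explicit model of the environment — here is a small tabular Q-learning sketch on a chain world. This is textbook code for illustration, not MLPro-RL code:

```python
# Generic tabular Q-learning on a chain world -- illustration only, NOT MLPro-RL code.
import random


def q_learning(n_states=5, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Start at state 0; reward 1 is given on reaching the last state."""
    rng = random.Random(seed)
    # Q[state][action]; actions: 0 = left, 1 = right.
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Model-free update: uses only the sampled transition (s, a, r, s2).
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q


q = q_learning()
```

Note that the update rule never queries the environment's dynamics directly; everything is learned from experienced transitions, which is the defining property of the model-free path.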
Model-Based Reinforcement Learning
Model-based RL comprises two learning paradigms: learning the environment model (model-based learning) and utilizing that model (e.g. as an action planner). To practice model-based RL in the MLPro-RL package, here is a howto file that can be followed:
For more advanced MBRL techniques, e.g. applying a native MBRL network, here is an example that can be used as a reference:
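The two paradigms of model-based RL — learning a model from experience, then using it to plan — can be sketched generically as follows. The class and function names here are hypothetical and chosen for illustration; this is not MLPro-RL code:

```python
# Illustrative sketch of the model-based idea -- NOT MLPro-RL code.
class LearnedModel:
    """Paradigm 1: learn a deterministic transition/reward table from experience."""

    def __init__(self):
        self.transitions = {}  # (state, action) -> (next_state, reward)

    def observe(self, state, action, next_state, reward):
        self.transitions[(state, action)] = (next_state, reward)

    def predict(self, state, action):
        return self.transitions.get((state, action))


def plan(model, state, actions, horizon=3):
    """Paradigm 2: action planner -- pick the first action of the best imagined rollout."""

    def rollout(s, depth):
        # Best imagined return from s within the remaining depth.
        if depth == 0:
            return 0.0
        best = 0.0
        for a in actions:
            pred = model.predict(s, a)
            if pred is None:
                continue
            s2, r = pred
            best = max(best, r + rollout(s2, depth - 1))
        return best

    best_action, best_return = None, float("-inf")
    for a in actions:
        pred = model.predict(state, a)
        if pred is None:
            continue
        s2, r = pred
        ret = r + rollout(s2, horizon - 1)
        if ret > best_return:
            best_action, best_return = a, ret
    return best_action
```

The planner never touches the real environment during its rollouts; it simulates entirely inside the learned model, which is what distinguishes this path from model-free RL.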
- 7. Additional Guidance
After following the previous steps, we hope that you can put MLPro-RL into practice and start using this subpackage for your RL-related activities. For more advanced features, we highly recommend checking out the following howto files: