MLPro Documentation
Dynamic Games
Howto GT-DG-001: Run Multi-Player with Own Policy
Howto GT-DG-002: Train Multi-Player
Howto GT-DG-003: Train Multi-Player in Potential Games
Howto GT-DG-004: Train Multi-Player in Stackelberg Games
Native GT
Howto GT-Native-001: 2P Prisoners’ Dilemma
Howto GT-Native-002: 3P Prisoners’ Dilemma
Howto GT-Native-003: Rock, Paper, Scissors
Howto GT-Native-004: 3P Supply and Demand
Howto GT-Native-005: 3P Routing Problems