MLPro - Machine Learning Professional
Welcome to MLPro - the synoptic framework for standardized machine learning tasks in Python!
MLPro provides complete, standardized and reusable functionalities to support your scientific research, industrial projects or educational tasks in machine learning.
The documentation is currently under construction, but we are working hard to complete it. The framework itself is ready for use, and we are proud to offer extensive functionality, especially in the areas of reinforcement learning and game theory. Numerous self-study examples are available in Appendix 1, and the API documentation in Appendix 2 is largely complete. Try it out, and get in touch with any questions or problems. Have fun!
Table of Contents
- Basic Functions
- Howto BF-001: Logging
- Howto BF-002: Timer
- Howto BF-003: Dimensions, Spaces and Elements
- Howto BF-004: Store and plot data
- Howto BF-005: Hyperparameters
- Howto BF-006: Buffers
- Howto BF-007: Hyperparameter Tuning using Hyperopt
- Howto BF-008: Hyperparameter Tuning using Optuna
- Howto BF-009: SciUI - Reuse of interactive 2D/3D Input Space
- Howto BF-010: SciUI - Reinforcement Learning Cockpit
- Reinforcement Learning
- Howto RL-001: Types of reward
- Howto RL-002: Run an agent with its own policy in an OpenAI Gym environment
- Howto RL-003: Train an agent with its own policy on an OpenAI Gym environment
- Howto RL-004: Run a multi-agent with its own policy in an OpenAI Gym environment
- Howto RL-005: Train a multi-agent with its own policy on the Multi-Cartpole environment
- Howto RL-006: Run a multi-agent with its own policy in a PettingZoo environment
- Howto RL-007: Training of a wrapped Stable Baselines 3 policy
- Howto RL-008: Wrap native MLPro environment class to OpenAI Gym environment
- Howto RL-009: Wrap native MLPro environment class to PettingZoo environment
- Howto RL-010: Train a wrapped Stable Baselines 3 policy on MLPro’s native UR5 environment
- Howto RL-011: Train a wrapped Stable Baselines 3 policy on MLPro’s native UR5 environment (Paper)
- Howto RL-012: Train a wrapped Stable Baselines 3 policy on MLPro’s native RobotHTM environment
- Howto RL-013: Model Based Reinforcement Learning
- Howto RL-014: Advanced training with stagnation detection
- Howto RL-015: Train a wrapped Stable Baselines 3 policy with stagnation detection
- Howto RL-016: Comparison of native and wrapped Stable Baselines 3 policy
- Howto RL-017: Comparison of native and wrapped Stable Baselines 3 policy (off-policy)
- Howto RL-018: Train a wrapped Stable Baselines 3 policy on MLPro’s native MultiGeo environment
- Howto RL-019: Train and reuse a single agent
- Howto RL-020: Run a native random agent in MLPro’s native DoublePendulum environment
- Howto RL-021: Train a wrapped Stable Baselines 3 policy on MLPro’s native DoublePendulum environment
- Game Theory
- Online Adaptivity
Contact Data
Mail: mlpro@listen.fh-swf.de