MLPro - Machine Learning Professional
Welcome to MLPro - the integrative middleware framework for standardized machine learning in Python!
- MLPro is developed by scientists to enable
  - real-world ML projects at a high quality level,
  - comparable and reproducible results in publications, and
  - the exchange and reuse of standardized ML code.
For this purpose, MLPro provides advanced models and templates at a scientific level for a constantly growing number of sub-areas of machine learning. These are embedded in standard processes for training and real operation. Of course, we have not reinvented the wheel: an integral part of MLPro’s philosophy is to seamlessly integrate proven functionalities of relevant 3rd-party packages instead of developing them again. The scope is rounded off by numerous executable example programs that make it easy to get started in the world of MLPro.
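As a small illustration of this philosophy, the sketch below reuses a proven OpenAI Gym environment through an MLPro wrapper instead of re-implementing it. This is a minimal sketch, assuming the wrapper class WrEnvGYM2MLPro from mlpro.wrappers.openai_gym as used in the RL howtos listed below; names and signatures may differ in your installed version.

```python
# Minimal sketch: reuse a proven 3rd-party environment via an MLPro wrapper.
# Assumption: WrEnvGYM2MLPro is available as shown in Howtos RL-002/RL-003.
import gym
from mlpro.bf.various import Log
from mlpro.wrappers.openai_gym import WrEnvGYM2MLPro

# Reuse the well-known CartPole environment from OpenAI Gym...
gym_env = gym.make('CartPole-v1')

# ...and wrap it so that it behaves like a native MLPro environment
mlpro_env = WrEnvGYM2MLPro(gym_env, p_logging=Log.C_LOG_ALL)

# The wrapped environment now exposes MLPro's standardized interface
print(mlpro_env.get_state_space().get_num_dim())
print(mlpro_env.get_action_space().get_num_dim())
```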
- MLPro is also present on…
- Notes on the current version:
MLPro already provides two stable sub-frameworks: MLPro-RL for reinforcement learning and MLPro-GT for game theory (a first code sketch follows these notes).
The documentation is not yet complete, but we are working hard on it; in the meantime, the numerous sample programs in Appendix 1 and the API specification in Appendix 2 should help.
Next sub-framework in progress: MLPro-OA for online adaptive systems…
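To give a first impression of MLPro-RL, here is a condensed sketch in the style of Howtos RL-002 and RL-003: a non-learning random policy is plugged into a standard agent and run in a wrapped OpenAI Gym environment. This is a sketch under assumptions, not a verbatim howto: the class names and signatures used here (Policy, State, Action, Agent, RLScenario, Mode, WrEnvGYM2MLPro) follow the howtos listed below and may deviate in your installed version.

```python
# Condensed sketch of an MLPro-RL scenario (assumptions follow Howtos RL-002/RL-003)
import numpy as np
import gym
from mlpro.bf.various import Log
from mlpro.rl.models import *
from mlpro.wrappers.openai_gym import WrEnvGYM2MLPro


class MyPolicy(Policy):
    """Dummy policy that returns random actions and does not learn."""

    C_NAME = 'MyPolicy'

    def compute_action(self, p_state: State) -> Action:
        # One random value per dimension of the action space
        values = np.random.rand(self._action_space.get_num_dim())
        return Action(self._id, self._action_space, values)

    def _adapt(self, *p_args) -> bool:
        # A random policy has nothing to adapt
        return False


class MyScenario(RLScenario):
    C_NAME = 'Demo'

    def _setup(self, p_mode, p_ada, p_logging):
        # Reuse a Gym environment via MLPro's wrapper (see sketch above)
        self._env = WrEnvGYM2MLPro(gym.make('CartPole-v1'), p_logging=p_logging)

        # Standard single agent equipped with the policy above
        return Agent(
            p_policy=MyPolicy(
                p_observation_space=self._env.get_state_space(),
                p_action_space=self._env.get_action_space(),
                p_buffer_size=1,
                p_ada=p_ada,
                p_logging=p_logging),
            p_envmodel=None,
            p_name='Smith',
            p_ada=p_ada,
            p_logging=p_logging)


# Instantiate and run the scenario in simulation mode for 100 cycles
myscenario = MyScenario(p_mode=Mode.C_MODE_SIM,
                        p_ada=True,
                        p_cycle_limit=100,
                        p_visualize=False,
                        p_logging=Log.C_LOG_ALL)
myscenario.reset(p_seed=1)
myscenario.run()
```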
Table of Contents
- Basic Functions
- Howto BF-001: Logging
- Howto BF-002: Timer
- Howto BF-003: Dimensions, Spaces and Elements
- Howto BF-004: Store and plot data
- Howto BF-005: Hyperparameters
- Howto BF-006: Buffers
- Howto BF-007: Hyperparameter Tuning using Hyperopt
- Howto BF-008: Hyperparameter Tuning using Optuna
- Howto BF-009: SciUI - Reuse of interactive 2D/3D Input Space
- Howto BF-010: SciUI - Reinforcement Learning Cockpit
- Howto BF-011: Event Handling
- Howto BF-020: Accessing Data from OpenML
- Howto BF-021: Accessing Data from River
- Howto BF-022: Accessing Data from Scikit-Learn
- Reinforcement Learning
- Howto RL-001: Types of reward
- Howto RL-002: Run an agent with its own policy in an OpenAI Gym environment
- Howto RL-003: Train an agent with its own policy on an OpenAI Gym environment
- Howto RL-004: Run a multi-agent with its own policy in an OpenAI Gym environment
- Howto RL-005: Train a multi-agent with its own policy on the Multi-Cartpole environment
- Howto RL-006: Run a multi-agent with its own policy in a PettingZoo environment
- Howto RL-007: Training of a wrapped Stable Baselines 3 policy
- Howto RL-008: Wrap a native MLPro environment class into an OpenAI Gym environment
- Howto RL-009: Wrap a native MLPro environment class into a PettingZoo environment
- Howto RL-010: Train a wrapped Stable Baselines 3 policy on MLPro’s native UR5 environment
- Howto RL-011: Train a wrapped Stable Baselines 3 policy on MLPro’s native UR5 environment (Paper)
- Howto RL-012: Train a wrapped Stable Baselines 3 policy on MLPro’s native RobotHTM environment
- Howto RL-013: Model-Based Reinforcement Learning
- Howto RL-014: Advanced training with stagnation detection
- Howto RL-015: Train a wrapped Stable Baselines 3 policy with stagnation detection
- Howto RL-016: Comparison of native and wrapped Stable Baselines 3 policy
- Howto RL-017: Comparison of native and wrapped Stable Baselines 3 policy (off-policy)
- Howto RL-018: Train a wrapped Stable Baselines 3 policy on MLPro’s native MultiGeo environment
- Howto RL-019: Train and reuse a single agent
- Howto RL-020: Run a native random agent in MLPro’s native DoublePendulum environment
- Howto RL-021: Train a wrapped Stable Baselines 3 policy on MLPro’s native DoublePendulum environment
- Game Theory
- Online Adaptivity
Contact
Email: mlpro@listen.fh-swf.de