7 MetaBBO-RL optimizers, 1 MetaBBO-SL optimizer, and 11 classic optimizers are integrated into this platform. Choose one or more of them as baselines against which to test the performance of your own optimizer.
Supported MetaBBO-RL optimizers: DE-DDQN, QLPSO, DEDQN, LDE, RL-PSO, RLEPSO, and RL-HPSDE (see the class-name table below).
Supported MetaBBO-SL optimizer:
Name | Year | Related paper |
---|---|---|
RNN-OI | 2017 | Learning to learn without gradient descent by gradient descent |
Supported classic optimizers: PSO, DE, CMA-ES, Bayesian Optimization, GL-PSO, sDMS_PSO, j21, MadDE, SAHLPSO, NL_SHADE_LBC, and Random Search (see the class-name table below).
Note that Random Search performs uniformly random sampling to optimize the fitness.
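To illustrate that behavior, a minimal uniform random search could look like the sketch below (an illustrative example only; the platform's actual `Random_search` implementation may differ):

```python
import random

def random_search(fitness, bounds, budget=1000, seed=0):
    """Minimize `fitness` by sampling uniformly within `bounds`.

    bounds: list of (low, high) pairs, one per dimension.
    Returns the best point found and its fitness after `budget` evaluations.
    """
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        # Draw each coordinate uniformly at random from its search interval.
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = fitness(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Example: minimize the 2-D sphere function on [-5, 5]^2.
best_x, best_f = random_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```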
In run commands, specify an algorithm by its agent class name and its backbone optimizer class name:
MetaBBO-RL baselines:
Algorithm Name | Corresponding Agent Class | Corresponding Backbone Optimizer Class |
---|---|---|
DE-DDQN | DE_DDQN_Agent | DE_DDQN_Optimizer |
QLPSO | QLPSO_Agent | QLPSO_Optimizer |
DEDQN | DEDQN_Agent | DEDQN_Optimizer |
LDE | LDE_Agent | LDE_Optimizer |
RL-PSO | RL_PSO_Agent | RL_PSO_Optimizer |
RLEPSO | RLEPSO_Agent | RLEPSO_Optimizer |
RL-HPSDE | RL_HPSDE_Agent | RL_HPSDE_Optimizer |
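For reference, the agent/optimizer pairings in the table above can be captured in a small lookup table (an illustrative snippet, not part of the platform's API):

```python
# Mapping from MetaBBO-RL algorithm name to
# (agent class name, backbone optimizer class name), as listed above.
METABBO_RL_BASELINES = {
    "DE-DDQN": ("DE_DDQN_Agent", "DE_DDQN_Optimizer"),
    "QLPSO": ("QLPSO_Agent", "QLPSO_Optimizer"),
    "DEDQN": ("DEDQN_Agent", "DEDQN_Optimizer"),
    "LDE": ("LDE_Agent", "LDE_Optimizer"),
    "RL-PSO": ("RL_PSO_Agent", "RL_PSO_Optimizer"),
    "RLEPSO": ("RLEPSO_Agent", "RLEPSO_Optimizer"),
    "RL-HPSDE": ("RL_HPSDE_Agent", "RL_HPSDE_Optimizer"),
}

# Look up the class names to pass on the command line for one baseline.
agent_cls, optimizer_cls = METABBO_RL_BASELINES["LDE"]
```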
MetaBBO-SL baseline:
Algorithm Name | Corresponding Agent Class | Corresponding Backbone Optimizer Class |
---|---|---|
RNN-OI | L2L_Agent | L2L_Optimizer |
Classic baselines:
Algorithm Name | Corresponding Optimizer Class |
---|---|
PSO | DEAP_PSO |
DE | DEAP_DE |
CMA-ES | DEAP_CMAES |
Bayesian Optimization | BayesianOptimizer |
GL-PSO | GL_PSO |
sDMS_PSO | sDMS_PSO |
j21 | JDE21 |
MadDE | MadDE |
SAHLPSO | SAHLPSO |
NL_SHADE_LBC | NL_SHADE_LBC |
Random Search | Random_search |
For all baselines, the control parameter settings are listed in:
Classic: control_parameters_classic
MetaBBO: control_parameters_metabbo
To make the baselines and their metrics easy to inspect, we tested all baselines at two difficulty levels on each of three datasets. All data are provided in content.md.