API Reference

This page provides API documentation for the main classes and methods in D²MTOLab.

Algorithms

All algorithms follow a consistent interface:

import numpy as np

from ddmtolab.Algorithms.STSO.DE import DE
from ddmtolab.Methods.mtop import MTOP

# Example objective: the sphere function
# (the exact call signature expected by add_task is an assumption)
def sphere(x):
    return np.sum(x ** 2, axis=-1)

problem = MTOP()
problem.add_task(sphere, dim=10)

optimizer = DE(problem, n=50, max_nfes=1000)  # population of 50, 1000 evaluations
results = optimizer.optimize()

print(results.best_decs, results.best_objs)   # best decision variables and objectives

Single-Task Single-Objective (STSO)

Genetic Algorithm (GA)

This module implements the Genetic Algorithm for single-objective optimization problems.

References

[1] David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.

[2] John H. Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press, Ann Arbor, MI, 1st edition, 1975. Reprinted by MIT Press in 1992.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.11 Version: 1.0

class ddmtolab.Algorithms.STSO.GA.GA(problem, n=None, max_nfes=None, muc=2.0, mum=5.0, save_data=True, save_path='./Data', name='GA', disable_tqdm=True)[source]

Bases: object

Genetic Algorithm for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, muc=2.0, mum=5.0, save_data=True, save_path='./Data', name='GA', disable_tqdm=True)[source]

Initialize Genetic Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 2.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 5.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘GA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
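
The muc and mum distribution indices control how close offspring stay to their parents: larger values keep children nearer the parents. As an illustrative sketch of the SBX operator named above (not the library's internal implementation):

```python
import numpy as np

def sbx_crossover(p1, p2, muc=2.0, rng=None):
    """Simulated binary crossover: larger muc keeps children closer to parents."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(p1.shape)
    # spread factor beta drawn from the SBX distribution with index muc
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (muc + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (muc + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```

A useful sanity check: the two children are symmetric about the parents' midpoint, so c1 + c2 always equals p1 + p2.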

optimize()[source]

Execute the Genetic Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Differential Evolution (DE)

This module implements Differential Evolution for single-objective optimization problems.

References

[1] Storn, Rainer, and Kenneth Price. “Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces.” Journal of global optimization 11.4 (1997): 341-359.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.10.24 Version: 1.0

class ddmtolab.Algorithms.STSO.DE.DE(problem, n=None, max_nfes=None, F=0.5, CR=0.9, save_data=True, save_path='./Data', name='DE', disable_tqdm=True)[source]

Bases: object

Differential Evolution algorithm for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, F=0.5, CR=0.9, save_data=True, save_path='./Data', name='DE', disable_tqdm=True)[source]

Initialize Differential Evolution algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • F (float, optional) – Scaling factor for mutation (default: 0.5)

  • CR (float, optional) – Crossover probability (default: 0.9)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘DE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
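
F scales the difference vector in mutation and CR controls how many genes the trial vector takes from the mutant. A minimal sketch of the classic DE/rand/1/bin trial-vector construction (which DE variant the module actually uses is an assumption):

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    """Build a DE/rand/1/bin trial vector for individual i of pop (n x d)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    candidates = [j for j in range(n) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])  # differential mutation
    cross = rng.random(d) < CR                  # binomial crossover mask
    cross[rng.integers(d)] = True               # at least one gene from the mutant
    return np.where(cross, mutant, pop[i])
```

In the full algorithm the trial vector replaces individual i only if it achieves an equal or better objective value.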

optimize()[source]

Execute the Differential Evolution algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Particle Swarm Optimization (PSO)

This module implements Particle Swarm Optimization for single-objective optimization problems.

References

[1] Kennedy, James, and Russell Eberhart. “Particle swarm optimization.” Proceedings of ICNN’95-international conference on neural networks. Vol. 4. IEEE, 1995.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.10.23 Version: 1.0

class ddmtolab.Algorithms.STSO.PSO.PSO(problem, n=None, max_nfes=None, min_w=0.4, max_w=0.9, c1=0.2, c2=0.2, save_data=True, save_path='./Data', name='PSO', disable_tqdm=True)[source]

Bases: object

Particle Swarm Optimization algorithm for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, min_w=0.4, max_w=0.9, c1=0.2, c2=0.2, save_data=True, save_path='./Data', name='PSO', disable_tqdm=True)[source]

Initialize Particle Swarm Optimization algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • min_w (float, optional) – Minimum inertia weight (default: 0.4)

  • max_w (float, optional) – Maximum inertia weight (default: 0.9)

  • c1 (float, optional) – Cognitive coefficient (self-learning factor) (default: 0.2)

  • c2 (float, optional) – Social coefficient (swarm-learning factor) (default: 0.2)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘PSO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
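
The inertia weight typically decays linearly from max_w to min_w as the evaluation budget is consumed. A sketch of one velocity update under that assumption (the module's exact schedule may differ):

```python
import numpy as np

def pso_velocity(v, x, pbest, gbest, nfes, max_nfes,
                 min_w=0.4, max_w=0.9, c1=0.2, c2=0.2, rng=None):
    """One PSO velocity update with linearly decaying inertia weight."""
    rng = np.random.default_rng() if rng is None else rng
    w = max_w - (max_w - min_w) * nfes / max_nfes  # linear decay max_w -> min_w
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

When a particle sits exactly at both its personal and the global best, only the inertia term remains, so its velocity shrinks by the factor w each step.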

optimize()[source]

Execute the Particle Swarm Optimization algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

CMA-ES (Covariance Matrix Adaptation Evolution Strategy)

This module implements the CMA-ES algorithm for single-objective optimization problems.

References

[1] Hansen, N., & Ostermeier, A. (2001). Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation, 9(2), 159-195. DOI: 10.1162/106365601750190398

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.12 Version: 1.0

class ddmtolab.Algorithms.STSO.CMA_ES.CMA_ES(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, save_data=True, save_path='./Data', name='CMA-ES', disable_tqdm=True)[source]

Bases: object

CMA-ES for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, save_data=True, save_path='./Data', name='CMA-ES', disable_tqdm=True)[source]

Initialize CMA-ES Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: None, will use 4+3*log(D))

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • sigma0 (float, optional) – Initial step size (default: 0.3)

  • use_n (bool, optional) – If True, use provided n; if False, use 4+3*log(D) (default: True)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘CMA-ES’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
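
When n is None, the default population size follows the standard CMA-ES rule of thumb 4 + 3*log(D), with D the task dimension. A small sketch (whether the library floors or rounds the result is an assumption):

```python
import math

def default_popsize(dim):
    """Default CMA-ES population size: lambda = 4 + floor(3 * ln(dim))."""
    return 4 + int(3 * math.log(dim))
```

For a 10-dimensional task this gives 4 + floor(6.91) = 10.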

optimize()[source]

Execute the CMA-ES Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

IPOP-CMA-ES (Increasing Population CMA-ES)

This module implements the IPOP-CMA-ES algorithm for single-objective optimization problems.

References

[1] Auger, A., & Hansen, N. (2005). A Restart CMA Evolution Strategy with Increasing Population Size. 2005 IEEE Congress on Evolutionary Computation, 2, 1769-1776. DOI: 10.1109/CEC.2005.1554902

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.27 Version: 1.0

class ddmtolab.Algorithms.STSO.IPOP_CMA_ES.IPOP_CMA_ES(problem, n=None, max_nfes=None, sigma0=0.3, use_n=False, save_data=True, save_path='./Data', name='IPOP-CMA-ES', disable_tqdm=True)[source]

Bases: object

IPOP-CMA-ES for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, sigma0=0.3, use_n=False, save_data=True, save_path='./Data', name='IPOP-CMA-ES', disable_tqdm=True)[source]

Initialize IPOP-CMA-ES Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Initial population size per task (default: None, will use 4+3*log(D))

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • sigma0 (float, optional) – Initial step size (default: 0.3)

  • use_n (bool, optional) – If True, use provided n; if False, use 4+3*log(D) (default: False)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘IPOP-CMA-ES’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
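
The "increasing population" part refers to the restart schedule of Auger & Hansen: whenever a run stagnates, CMA-ES is restarted with the population size multiplied (doubled in the original paper). A sketch of that schedule (the stagnation criteria themselves are omitted):

```python
def ipop_schedule(lam0, restarts, factor=2):
    """IPOP restart schedule: population size grows by `factor` at each restart."""
    return [lam0 * factor ** k for k in range(restarts + 1)]
```

Starting from lambda = 10, three restarts yield populations 10, 20, 40, 80.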

optimize()[source]

Execute the IPOP-CMA-ES Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

sep-CMA-ES (Separable Covariance Matrix Adaptation Evolution Strategy)

This module implements the sep-CMA-ES algorithm for single-objective optimization problems. sep-CMA-ES achieves linear time and space complexity by using a diagonal covariance matrix.

References

[1] Ros, R., & Hansen, N. (2008). A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity. Parallel Problem Solving from Nature, PPSN X, 296-305. DOI: 10.1007/978-3-540-87700-4_30

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.27 Version: 1.0

class ddmtolab.Algorithms.STSO.sep_CMA_ES.sep_CMA_ES(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, save_data=True, save_path='./Data', name='sep-CMA-ES', disable_tqdm=True)[source]

Bases: object

sep-CMA-ES for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, save_data=True, save_path='./Data', name='sep-CMA-ES', disable_tqdm=True)[source]

Initialize sep-CMA-ES Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: None, will use 4+3*log(D))

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • sigma0 (float, optional) – Initial step size (default: 0.3)

  • use_n (bool, optional) – If True, use provided n; if False, use 4+3*log(D) (default: True)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘sep-CMA-ES’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the sep-CMA-ES Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

MA-ES (Matrix Adaptation Evolution Strategy)

This module implements the MA-ES algorithm for single-objective optimization problems. MA-ES uses matrix adaptation instead of covariance matrix adaptation for efficiency.

References

[1] Beyer, H.-G., & Sendhoff, B. (2017). Simplify Your Covariance Matrix Adaptation Evolution Strategy. IEEE Transactions on Evolutionary Computation, 21(5), 746-759. DOI: 10.1109/TEVC.2017.2680320

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.27 Version: 1.0

class ddmtolab.Algorithms.STSO.MA_ES.MA_ES(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, save_data=True, save_path='./Data', name='MA-ES', disable_tqdm=True)[source]

Bases: object

MA-ES for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, save_data=True, save_path='./Data', name='MA-ES', disable_tqdm=True)[source]

Initialize MA-ES Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: None, will use 4+3*log(D))

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • sigma0 (float, optional) – Initial step size (default: 0.3)

  • use_n (bool, optional) – If True, use provided n; if False, use 4+3*log(D) (default: True)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MA-ES’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MA-ES Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

OpenAI-ES (OpenAI Evolution Strategies)

This module implements the OpenAI-ES algorithm for single-objective optimization problems. OpenAI-ES uses antithetic sampling and momentum-based gradient descent.

References

[1] Salimans, T., Ho, J., Chen, X., Sidor, S., & Sutskever, I. (2017). Evolution Strategies as a Scalable Alternative to Reinforcement Learning. arXiv:1703.03864 [stat.ML].

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.27 Version: 1.0

class ddmtolab.Algorithms.STSO.OpenAI_ES.OpenAI_ES(problem, n=None, max_nfes=None, sigma=1.0, lr=0.001, momentum=0.9, save_data=True, save_path='./Data', name='OpenAI-ES', disable_tqdm=True)[source]

Bases: object

OpenAI-ES for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, sigma=1.0, lr=0.001, momentum=0.9, save_data=True, save_path='./Data', name='OpenAI-ES', disable_tqdm=True)[source]

Initialize OpenAI-ES Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (must be even, default: None, will use 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • sigma (float, optional) – Noise standard deviation (default: 1.0)

  • lr (float, optional) – Learning rate (default: 1e-3)

  • momentum (float, optional) – Momentum coefficient (default: 0.9)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘OpenAI-ES’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
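
Antithetic sampling evaluates each noise vector in plus/minus pairs, which is why the population size must be even. A self-contained sketch of the resulting gradient estimate (an illustration of the idea from Salimans et al., not the module's exact code; the momentum term applied on top of it is omitted):

```python
import numpy as np

def es_gradient(f, theta, n=10, sigma=1.0, rng=None):
    """Antithetic ES estimate of grad f at theta from n evaluations (n even)."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((n // 2, theta.size))
    f_plus = np.array([f(theta + sigma * e) for e in eps])
    f_minus = np.array([f(theta - sigma * e) for e in eps])
    # each antithetic pair contributes (f+ - f-) * eps / (2 * sigma)
    return ((f_plus - f_minus)[:, None] * eps).mean(axis=0) / (2.0 * sigma)
```

For minimization the parameter update is then theta -= lr * grad, optionally smoothed by the momentum coefficient.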

optimize()[source]

Execute the OpenAI-ES Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

xNES (Exponential Natural Evolution Strategies)

This module implements the xNES algorithm for single-objective optimization problems. xNES uses natural gradients to adapt the search distribution.

References

[1] Glasmachers, T., Schaul, T., Yi, S., Wierstra, D., & Schmidhuber, J. (2010). Exponential Natural Evolution Strategies. Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, 393-400.

[2] Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., & Schmidhuber, J. (2014). Natural Evolution Strategies. Journal of Machine Learning Research, 15(27), 949-980.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.27 Version: 1.0

class ddmtolab.Algorithms.STSO.xNES.xNES(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, save_data=True, save_path='./Data', name='xNES', disable_tqdm=True)[source]

Bases: object

xNES for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, save_data=True, save_path='./Data', name='xNES', disable_tqdm=True)[source]

Initialize xNES Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: None, will use 4+3*log(D))

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • sigma0 (float, optional) – Initial step size (default: 0.3)

  • use_n (bool, optional) – If True, use provided n; if False, use 4+3*log(D) (default: True)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘xNES’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the xNES Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Competitive Swarm Optimizer (CSO)

This module implements Competitive Swarm Optimizer for single-objective optimization problems.

References

[1] Cheng, Ran, and Yaochu Jin. “A competitive swarm optimizer for large scale optimization.” IEEE Transactions on Cybernetics 45.2 (2015): 191-204.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.10.31 Version: 1.0

class ddmtolab.Algorithms.STSO.CSO.CSO(problem, n=None, max_nfes=None, phi=0.1, save_data=True, save_path='./Data', name='CSO', disable_tqdm=True)[source]

Bases: object

Competitive Swarm Optimizer algorithm for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, phi=0.1, save_data=True, save_path='./Data', name='CSO', disable_tqdm=True)[source]

Initialize Competitive Swarm Optimizer algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • phi (float, optional) – Social influence parameter for mean position learning (default: 0.1)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘CSO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
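
In CSO, particles compete in random pairs each generation: the winner passes through unchanged, while the loser learns from the winner and, weighted by phi, from the swarm's mean position. A sketch of one loser update (the pairing and fitness comparison are omitted; not the module's exact code):

```python
import numpy as np

def cso_compete(xw, xl, vl, mean_pos, phi=0.1, rng=None):
    """Update the loser (xl, vl) of one pairwise competition against winner xw."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2, r3 = (rng.random(xl.shape) for _ in range(3))
    # loser is pulled toward the winner and (scaled by phi) toward the swarm mean
    vl_new = r1 * vl + r2 * (xw - xl) + phi * r3 * (mean_pos - xl)
    return xl + vl_new, vl_new
```

Because only losers are updated, half the swarm is evaluated anew per generation, which is part of what makes CSO scale to large problems.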

optimize()[source]

Execute the Competitive Swarm Optimizer algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Social Learning Particle Swarm Optimization (SL-PSO)

This module implements the Social Learning PSO for single-objective optimization problems.

References

[1] Cheng, R., & Jin, Y. (2014). A social learning particle swarm optimization algorithm for scalable optimization. Information Sciences, 291, 43-60.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.22 Version: 1.0

class ddmtolab.Algorithms.STSO.SL_PSO.SL_PSO(problem, n=None, max_nfes=None, save_data=True, save_path='./Data', name='SL-PSO', disable_tqdm=True)[source]

Bases: object

Social Learning Particle Swarm Optimization for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, save_data=True, save_path='./Data', name='SL-PSO', disable_tqdm=True)[source]

Initialize Social Learning PSO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SL-PSO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Social Learning PSO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Knowledge Learning PSO (KLPSO)

This module implements Knowledge Learning PSO for single-objective optimization problems. A neural network learns successful movement patterns from improved particles. With probability lr, particles move using the learned knowledge instead of the standard PSO velocity update.

References

[1] Jiang, Yi, et al. “Knowledge Learning for Evolutionary Computation.” IEEE Transactions on Evolutionary Computation, 2023. DOI: 10.1109/TEVC.2023.3278132

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.21 Version: 1.0

class ddmtolab.Algorithms.STSO.KLPSO.KLPSO(problem, n=None, max_nfes=None, lr=0.2, ep=10, min_w=0.4, max_w=0.9, c1=0.2, c2=0.2, save_data=True, save_path='./Data', name='KLPSO', disable_tqdm=True)[source]

Bases: object

Knowledge Learning PSO algorithm for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, lr=0.2, ep=10, min_w=0.4, max_w=0.9, c1=0.2, c2=0.2, save_data=True, save_path='./Data', name='KLPSO', disable_tqdm=True)[source]

Initialize KLPSO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • lr (float, optional) – Learning rate - probability of using neural net prediction (default: 0.2)

  • ep (int, optional) – Number of training epochs per generation (default: 10)

  • min_w (float, optional) – Minimum inertia weight (default: 0.4)

  • max_w (float, optional) – Maximum inertia weight (default: 0.9)

  • c1 (float, optional) – Cognitive coefficient (default: 0.2)

  • c2 (float, optional) – Social coefficient (default: 0.2)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘KLPSO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
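
The lr parameter gates, per particle, whether the move comes from the trained network or from the ordinary PSO velocity update. A sketch of that gating (nn_predict is a hypothetical stand-in for the module's trained network):

```python
import numpy as np

def klpso_move(x, v_pso, nn_predict, lr=0.2, rng=None):
    """With probability lr use the learned move; otherwise the PSO velocity update."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < lr:
        return x + nn_predict(x)  # movement pattern predicted by the network
    return x + v_pso
```

At lr=0 the algorithm reduces to plain PSO; at lr=1 every move comes from the learned model.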

optimize()[source]

Execute the KLPSO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Surrogate-assisted Hierarchical Particle Swarm Optimization (SHPSO)

This module implements SHPSO for expensive single-objective optimization problems.

References

[1] Yu, Haibo, et al. “Surrogate-assisted hierarchical particle swarm optimization.” Information Sciences 454 (2018): 59-72.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.13 Version: 1.0

class ddmtolab.Algorithms.STSO.SHPSO.SHPSO(problem, n_initial=None, max_nfes=None, ps=None, mu=5, save_data=True, save_path='./Data', name='SHPSO', disable_tqdm=True)[source]

Bases: object

Surrogate-assisted Hierarchical Particle Swarm Optimization for expensive optimization problems.

This algorithm uses a two-level hierarchical structure:

  1. Upper level: an RBF surrogate model, optimized by SL-PSO, identifies promising regions

  2. Lower level: a PSO swarm guided by the surrogate-model optimum with a prescreening strategy

__init__(problem, n_initial=None, max_nfes=None, ps=None, mu=5, save_data=True, save_path='./Data', name='SHPSO', disable_tqdm=True)[source]

Initialize SHPSO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 300)

  • ps (int or List[int], optional) – Particle swarm size per task (default: 20)

  • mu (int, optional) – Number of new samples per iteration (default: 5). With mu=1, only the surrogate optimum is evaluated (most conservative); with mu=k, the surrogate optimum plus the top (k-1) prescreened particles are evaluated.

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SHPSO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SHPSO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Grey Wolf Optimizer (GWO)

This module implements the Grey Wolf Optimizer for single-objective optimization problems.

References

[1] Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in engineering software, 69, 46-61.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.08 Version: 1.0

class ddmtolab.Algorithms.STSO.GWO.GWO(problem, n=None, max_nfes=None, save_data=True, save_path='./Data', name='GWO', disable_tqdm=True)[source]

Bases: object

Grey Wolf Optimizer for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, save_data=True, save_path='./Data', name='GWO', disable_tqdm=True)[source]

Initialize Grey Wolf Optimizer.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘GWO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
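
Each wolf moves toward the average of positions prescribed by the three best wolves (alpha, beta, delta), with exploration controlled by a coefficient a that decays from 2 to 0 over the run. An illustrative sketch of one position update (not the module's exact code):

```python
import numpy as np

def gwo_step(wolf, alpha, beta, delta, a, rng=None):
    """One GWO position update toward the alpha, beta and delta wolves."""
    rng = np.random.default_rng() if rng is None else rng
    moves = []
    for leader in (alpha, beta, delta):
        A = a * (2.0 * rng.random(wolf.shape) - 1.0)  # exploration coeff in [-a, a]
        C = 2.0 * rng.random(wolf.shape)
        D = np.abs(C * leader - wolf)                 # distance to the leader
        moves.append(leader - A * D)
    return np.mean(moves, axis=0)
```

At a = 0 every partial move lands exactly on its leader, so the update collapses to the centroid of the three best wolves.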

optimize()[source]

Execute the Grey Wolf Optimizer algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Aquila Optimizer (AO)

This module implements the Aquila Optimizer for single-objective optimization problems.

References

[1] Abualigah, L., Yousri, D., Abd Elaziz, M., Ewees, A. A., Al-qaness, M. A., & Gandomi, A. H. (2021). Aquila Optimizer: A novel meta-heuristic optimization algorithm. Computers & Industrial Engineering, 157, 107250.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.10 Version: 1.0

class ddmtolab.Algorithms.STSO.AO.AO(problem, n=None, max_nfes=None, alpha=0.1, delta=0.1, save_data=True, save_path='./Data', name='AO', disable_tqdm=True)[source]

Bases: object

Aquila Optimizer for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, alpha=0.1, delta=0.1, save_data=True, save_path='./Data', name='AO', disable_tqdm=True)[source]

Initialize Aquila Optimizer.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • alpha (float, optional) – Exploitation adjustment parameter (default: 0.1)

  • delta (float, optional) – Exploitation adjustment parameter (default: 0.1)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘AO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Aquila Optimizer algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Equilibrium Optimizer (EO)

This module implements the Equilibrium Optimizer for single-objective optimization problems.

References

[1] Faramarzi, A., Heidarinejad, M., Stephens, B., & Mirjalili, S. (2020). Equilibrium optimizer: A novel optimization algorithm. Knowledge-Based Systems, 191, 105190.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.12 Version: 1.0

class ddmtolab.Algorithms.STSO.EO.EO(problem, n=None, max_nfes=None, a1=2, a2=1, v=1, gp=0.5, save_data=True, save_path='./Data', name='EO', disable_tqdm=True)[source]

Bases: object

Equilibrium Optimizer for single-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, a1=2, a2=1, v=1, gp=0.5, save_data=True, save_path='./Data', name='EO', disable_tqdm=True)[source]

Initialize Equilibrium Optimizer.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • a1 (float, optional) – Exploration constant (default: 2)

  • a2 (float, optional) – Exploitation constant (default: 1)

  • v (float, optional) – Volume coefficient (default: 1)

  • gp (float, optional) – Generation probability (default: 0.5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Equilibrium Optimizer algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Global-Local Surrogate-Assisted Differential Evolution (GL-SADE)

This module implements GL-SADE for expensive single-objective optimization problems.

References

[1] Wang, Weizhong, Hai-Lin Liu, and Kay Chen Tan. “A surrogate-assisted differential evolution algorithm for high-dimensional expensive optimization problems.” IEEE Transactions on Cybernetics 53.4 (2022): 2685-2697.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.13 Version: 1.0

class ddmtolab.Algorithms.STSO.GL_SADE.GL_SADE(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='GL-SADE', disable_tqdm=True)[source]

Bases: object

Global-Local Surrogate-Assisted Differential Evolution for expensive optimization problems.

This algorithm adaptively switches between:

1. Global search: RBF model with plain acquisition

2. Local search: GPR model with LCB-decay acquisition

__init__(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='GL-SADE', disable_tqdm=True)[source]

Initialize GL-SADE algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 300)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘GL-SADE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the GL-SADE algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
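GL-SADE’s global stage relies on an RBF surrogate fitted to all evaluated samples. The following is a minimal sketch of fitting and querying a Gaussian RBF interpolant; the function name, the width parameter eps, and the jitter value are illustrative assumptions, not this module’s internals.

```python
import numpy as np

def fit_gaussian_rbf(X, y, eps=10.0):
    """Fit an interpolating Gaussian RBF surrogate (illustrative sketch).

    Solves Phi @ w = y with Phi[i, j] = exp(-eps * ||x_i - x_j||^2) and
    returns a callable predictor. eps is a hypothetical kernel width.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-eps * d2)
    # Tiny diagonal jitter keeps the solve stable for near-duplicate samples
    w = np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), y)

    def predict(Z):
        d2z = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2z) @ w

    return predict
```

An interpolating surrogate reproduces the training objectives exactly (up to the jitter), which is what makes prescreening offspring on the model essentially free compared with true evaluations.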

Bayesian Optimization (BO)

This module implements Bayesian Optimization for expensive single-objective optimization problems.

References

[1] Jones, Donald R., Matthias Schonlau, and William J. Welch. “Efficient global optimization of expensive black-box functions.” Journal of Global Optimization 13.4 (1998): 455-492.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.11 Version: 1.0

class ddmtolab.Algorithms.STSO.BO.BO(problem, n_initial=None, max_nfes=None, mode='ei', save_data=True, save_path='./Data', name='BO', disable_tqdm=True)[source]

Bases: object

Bayesian Optimization algorithm for expensive optimization problems.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, mode='ei', save_data=True, save_path='./Data', name='BO', disable_tqdm=True)[source]

Initialize Bayesian Optimization algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 100)

  • mode (str, optional) – Acquisition function mode: ‘ei’ for Expected Improvement or ‘lcb’ for Lower Confidence Bound (default: ‘ei’)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘BO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Bayesian Optimization algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
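The two acquisition modes accepted by the mode parameter have closed forms under a Gaussian posterior. For minimization, Expected Improvement is E[max(f_best − f, 0)] and Lower Confidence Bound is μ − κσ. The sketch below shows both formulas with standard-normal helpers; the function names and the κ default are illustrative assumptions, not this module’s API.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, f_best):
    """EI for minimization under a N(mu, sigma^2) posterior prediction."""
    if sigma <= 0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

def lower_confidence_bound(mu, sigma, kappa=2.0):
    """LCB for minimization: smaller values are more promising."""
    return mu - kappa * sigma
```

EI balances exploitation (the first term rewards low predicted means) against exploration (the second rewards high predictive uncertainty), which is why ‘ei’ is the usual default over the simpler ‘lcb’.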

Evolutionary Expected Improvement based Bayesian Optimization (EEI-BO)

This module implements Bayesian Optimization for expensive single-objective optimization problems using an evolutionary approach to optimize the Expected Improvement acquisition function.

References

[1] Liu, Jiao, et al. “Solving highly expensive optimization problems via evolutionary expected improvement.” IEEE Transactions on Systems, Man, and Cybernetics: Systems 53.8 (2023): 4843-4855.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.17 Version: 1.0

class ddmtolab.Algorithms.STSO.EEI_BO.EEI_BO(problem, n_initial=None, max_nfes=None, n1=50, max_nfes1=500, n2=30, max_nfes2=6000, save_data=True, save_path='./Data', name='EEI-BO', disable_tqdm=True)[source]

Bases: object

Evolutionary Expected Improvement based Bayesian Optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n1=50, max_nfes1=500, n2=30, max_nfes2=6000, save_data=True, save_path='./Data', name='EEI-BO', disable_tqdm=True)[source]

Initialize EEI-BO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 100)

  • n1 (int, optional) – Population size of CMA-ES (default: 50)

  • max_nfes1 (int, optional) – Maximum number of function evaluations of CMA-ES (default: 500)

  • n2 (int, optional) – Population size of DE (default: 30)

  • max_nfes2 (int, optional) – Maximum number of function evaluations of DE (default: 6000)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EEI-BO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Evolutionary Expected Improvement based Bayesian Optimization algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Exploration-exploitation Switching Assisted Optimization (ESAO)

This module implements ESAO for expensive single-objective optimization problems.

References

[1] Wang, Xinjing, et al. “A novel evolutionary sampling assisted optimization method for high-dimensional expensive problems.” IEEE Transactions on Evolutionary Computation 23.5 (2019): 815-827.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.13 Version: 1.0

class ddmtolab.Algorithms.STSO.ESAO.ESAO(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='ESAO', disable_tqdm=True)[source]

Bases: object

Exploration-exploitation Switching Assisted Optimization for expensive optimization problems.

This algorithm adaptively switches between global and local search based on improvement:

1. Global search: RBF model built on all data points

2. Local search: RBF model built on top 2*dim points nearest to the best

__init__(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='ESAO', disable_tqdm=True)[source]

Initialize ESAO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 300)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘ESAO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the ESAO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
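ESAO’s local stage builds its RBF model only on the 2*dim archive points nearest the current best solution. Selecting that subset is a one-liner in NumPy; the helper name is a hypothetical illustration of the idea, not this module’s code.

```python
import numpy as np

def local_model_subset(X, y, dim):
    """Pick the 2*dim archive points nearest (Euclidean) to the current best.

    X : (n, d) evaluated samples, y : (n,) their objective values.
    Returns the subset used to build the local surrogate.
    """
    best = X[np.argmin(y)]                                  # incumbent solution
    idx = np.argsort(np.linalg.norm(X - best, axis=1))[: 2 * dim]
    return X[idx], y[idx]
```

Because the incumbent has distance zero to itself, it always lands first in the subset, so the local model is anchored at the best-known solution.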

Surrogate-Assisted Cooperative Swarm Optimization (SA-COSO)

This module implements SA-COSO for expensive single-objective optimization problems.

References

[1] Sun, Chaoli, et al. “Surrogate-assisted cooperative swarm optimization of high-dimensional expensive problems.” IEEE Transactions on Evolutionary Computation 21.4 (2017): 644-660.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.13 Version: 1.0

class ddmtolab.Algorithms.STSO.SA_COSO.SA_COSO(problem, n_initial=None, max_nfes=None, n_fes=30, n_rbf=100, mu=5, save_data=True, save_path='./Data', name='SA-COSO', disable_tqdm=True)[source]

Bases: object

Surrogate-Assisted Cooperative Swarm Optimization for expensive optimization problems.

This algorithm uses two cooperative swarms:

1. FES-PSO: Small swarm with Fitness Estimation Strategy to reduce evaluations

2. RBF-SLPSO: Large swarm with RBF-assisted Social Learning PSO

__init__(problem, n_initial=None, max_nfes=None, n_fes=30, n_rbf=100, mu=5, save_data=True, save_path='./Data', name='SA-COSO', disable_tqdm=True)[source]

Initialize SA-COSO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: n_fes + n_rbf)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 300)

  • n_fes (int, optional) – Population size of FES-assisted PSO (default: 30)

  • n_rbf (int, optional) – Population size of RBF-assisted SL-PSO (default: 100)

  • mu (int, optional) – Total number of samples per iteration (default: 5, must be >= 2): (mu - 1) samples come from FES-PSO and 1 sample from RBF-SLPSO

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SA-COSO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SA-COSO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Three-Level Radial Basis Function Method (TLRBF)

This module implements TLRBF for expensive single-objective optimization problems.

References

[1] Li, Genghui, et al. “A three-level radial basis function method for expensive optimization.” IEEE Transactions on Cybernetics 52.7 (2022): 5720-5731.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.13 Version: 1.0

class ddmtolab.Algorithms.STSO.TLRBF.TLRBF(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='TLRBF', disable_tqdm=True)[source]

Bases: object

Three-Level Radial Basis Function Method for expensive optimization problems.

This algorithm uses three search strategies in rotation:

1. Global search: Random sampling with distance filtering

2. Subregion search: FCM clustering + local RBF models

3. Local search: K-nearest neighbors + local RBF model

__init__(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='TLRBF', disable_tqdm=True)[source]

Initialize TLRBF algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 300)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘TLRBF’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the TLRBF algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Surrogate-Assisted EA with Model and Infill Criterion Auto-Configuration (AutoSAEA)

This module implements AutoSAEA for expensive single-objective optimization problems.

References

[1] Xie, L., Li, G., Wang, Z., Cui, L., & Gong, M. (2023). Surrogate-assisted evolutionary algorithm with model and infill criterion auto-configuration. IEEE Transactions on Evolutionary Computation.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.19 Version: 1.0

class ddmtolab.Algorithms.STSO.AutoSAEA.AutoSAEA(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='AutoSAEA', disable_tqdm=True)[source]

Bases: object

Surrogate-Assisted EA with Model and Infill Criterion Auto-Configuration.

Uses a Two-Level UCB multi-armed bandit to adaptively select from 8 model-criterion combinations:

  • {RBF, prescreening}, {RBF, local search}

  • {GP, LCB}, {GP, EI}

  • {PRS, prescreening}, {PRS, local search}

  • {KNN, exploitation}, {KNN, exploration}

__init__(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='AutoSAEA', disable_tqdm=True)[source]

Initialize AutoSAEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘AutoSAEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the AutoSAEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
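The bandit at the heart of AutoSAEA treats each model-criterion pair as an arm and balances its observed reward against how rarely it has been tried. A minimal UCB1-style selector, which is the textbook form of the rule rather than AutoSAEA’s exact two-level variant, looks like this (the function name and signature are illustrative assumptions):

```python
import math

def ucb_select(mean_rewards, counts, c=math.sqrt(2)):
    """UCB1 arm selection: maximize mean_i + c * sqrt(ln(total) / n_i).

    mean_rewards[i] : average reward of arm i so far
    counts[i]       : number of times arm i has been played
    Arms never played (n_i == 0) are tried first.
    """
    total = sum(counts)
    best, best_val = 0, -math.inf
    for i, (m, n) in enumerate(zip(mean_rewards, counts)):
        # Exploration bonus shrinks as an arm accumulates plays
        val = math.inf if n == 0 else m + c * math.sqrt(math.log(total) / n)
        if val > best_val:
            best, best_val = i, val
    return best
```

In AutoSAEA’s setting, an “arm” would be one of the 8 model-criterion combinations and the reward would reflect whether the infill sample improved the archive.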

Data-Driven Evolutionary Algorithm with Multi-Evolutionary Sampling Strategy (DDEA-MESS)

This module implements DDEA-MESS for expensive single-objective optimization problems.

References

[1] Yu, F., Gong, W., & Zhen, H. (2022). A data-driven evolutionary algorithm with multi-evolutionary sampling strategy for expensive optimization. Knowledge-Based Systems, 242, 108436.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.19 Version: 1.0

class ddmtolab.Algorithms.STSO.DDEA_MESS.DDEA_MESS(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='DDEA-MESS', disable_tqdm=True)[source]

Bases: object

Data-Driven Evolutionary Algorithm with Multi-Evolutionary Sampling Strategy.

Dynamically selects from three search strategies based on evaluation budget usage:

1. Global search: DE/rand/1 prescreening on RBF model built from first min(N, 300) samples

2. Local search: DE/best/1 on RBF model built from top tau samples by fitness

3. Trust region search: Local optimization (L-BFGS-B) on RBF model around best solution

__init__(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='DDEA-MESS', disable_tqdm=True)[source]

Initialize DDEA-MESS algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 300)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘DDEA-MESS’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the DDEA-MESS algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Lipschitz Surrogate-Assisted Differential Evolution (LSADE)

This module implements LSADE for expensive single-objective optimization problems.

References

[1] Kudela, J., & Matousek, R. (2023). Combining Lipschitz and RBF surrogate models for high-dimensional computationally expensive problems. Information Sciences, 619, 457-477.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.19 Version: 1.0

class ddmtolab.Algorithms.STSO.LSADE.LSADE(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='LSADE', disable_tqdm=True)[source]

Bases: object

Lipschitz Surrogate-Assisted Differential Evolution for expensive optimization problems.

Uses three surrogate strategies in deterministic rotation:

1. RBF prescreening: DE/best/1 on Gaussian RBF model

2. Lipschitz prescreening: DE/best/1 on Lipschitz lower-bound surrogate

3. Local optimization: SQP on local RBF model around best solutions

__init__(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='LSADE', disable_tqdm=True)[source]

Initialize LSADE algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 300)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘LSADE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the LSADE algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
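The Lipschitz surrogate used in LSADE’s second strategy has a simple closed form: for a function with Lipschitz constant L, every sample gives the bound f(z) ≥ f(x_i) − L·‖z − x_i‖, so the pointwise maximum over samples is a valid lower bound. A vectorized sketch (function name and array layout are illustrative assumptions):

```python
import numpy as np

def lipschitz_lower_bound(Z, X, y, L):
    """Lipschitz lower-bound surrogate: max_i ( y_i - L * ||z - x_i|| ).

    Z : (m, d) query points, X : (n, d) evaluated samples, y : (n,) values.
    Valid whenever the true objective is L-Lipschitz, so minimizing this
    bound steers the search toward optimistic regions.
    """
    dist = np.linalg.norm(Z[:, None, :] - X[None, :, :], axis=2)  # (m, n)
    return (y[None, :] - L * dist).max(axis=1)
```

At the samples themselves the bound is tight (it returns y_i exactly when L upper-bounds the data’s slopes), and far from all samples it drops off linearly, which naturally rewards exploration.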

Single-Task Multiobjective (STMO)

Nondominated Sorting Genetic Algorithm II (NSGA-II)

This module implements NSGA-II for multi-objective optimization problems.

References

[1] Deb, Kalyanmoy, et al. “A fast and elitist multiobjective genetic algorithm: NSGA-II.” IEEE Transactions on Evolutionary Computation 6.2 (2002): 182-197.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.10.23 Version: 1.0

class ddmtolab.Algorithms.STMO.NSGA_II.NSGA_II(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='NSGA-II', disable_tqdm=True)[source]

Bases: object

Nondominated Sorting Genetic Algorithm II for multi-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='NSGA-II', disable_tqdm=True)[source]

Initialize NSGA-II algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘NSGA-II’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the NSGA-II algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
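NSGA-II breaks ties within a nondominated front by crowding distance, which rewards solutions in sparse regions of objective space. A compact sketch of that computation (the helper name is an illustrative assumption, not this module’s API):

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of one nondominated front F with shape (n, m).

    Boundary solutions per objective get infinite distance; interior ones
    accumulate the normalized gap between their two neighbors, summed over
    objectives. Larger distance = less crowded = preferred in selection.
    """
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        fj = F[order, j]
        dist[order[0]] = dist[order[-1]] = np.inf   # keep the extremes
        span = fj[-1] - fj[0]
        if span > 0:
            dist[order[1:-1]] += (fj[2:] - fj[:-2]) / span
    return dist
```

Environmental selection then sorts by (front rank, −crowding distance), which is what gives NSGA-II its diversity preservation without extra parameters.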

Nondominated Sorting Genetic Algorithm III (NSGA-III)

This module implements NSGA-III for many-objective optimization problems.

References

[1] Deb, Kalyanmoy, and Himanshu Jain. “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints.” IEEE Transactions on Evolutionary Computation 18.4 (2014): 577-601.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.12 Version: 1.0

class ddmtolab.Algorithms.STMO.NSGA_III.NSGA_III(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='NSGA-III', disable_tqdm=True)[source]

Bases: object

Nondominated Sorting Genetic Algorithm III for many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='NSGA-III', disable_tqdm=True)[source]

Initialize NSGA-III algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘NSGA-III’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the NSGA-III algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
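NSGA-III’s niching relies on a set of structured reference points on the unit simplex, conventionally generated by the Das-Dennis simplex-lattice design. A self-contained sketch of that construction via stars-and-bars (the function name is an illustrative assumption; the library may generate its reference set differently):

```python
from itertools import combinations

def das_dennis(m, h):
    """Das-Dennis reference points: all m-tuples of nonnegative multiples
    of 1/h summing to 1. Produces C(h + m - 1, m - 1) points on the simplex.

    m : number of objectives, h : number of divisions per objective.
    """
    points = []
    # Each choice of m-1 "dividers" among h+m-1 slots encodes one
    # composition of h into m nonnegative parts (stars and bars).
    for dividers in combinations(range(h + m - 1), m - 1):
        prev, comp = -1, []
        for d in dividers:
            comp.append(d - prev - 1)
            prev = d
        comp.append(h + m - 2 - prev)
        points.append([c / h for c in comp])
    return points
```

For 3 objectives with h=12 divisions this yields 91 points, which is why NSGA-III population sizes are often chosen near such lattice counts.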

Nondominated Sorting Genetic Algorithm II with Strengthened Dominance Relation (NSGA-II-SDR)

This module implements NSGA-II-SDR for multi-objective optimization problems.

References

[1] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin. A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization. IEEE Transactions on Evolutionary Computation, 2019, 23(2): 331-345.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.14 Version: 1.0

class ddmtolab.Algorithms.STMO.NSGA_II_SDR.NSGA_II_SDR(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='NSGA-II-SDR', disable_tqdm=True)[source]

Bases: object

Nondominated Sorting Genetic Algorithm II with Strengthened Dominance Relation for multi-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='NSGA-II-SDR', disable_tqdm=True)[source]

Initialize NSGA-II-SDR algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘NSGA-II-SDR’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the NSGA-II-SDR algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D)

This module implements MOEA/D for multi-objective optimization problems.

References

[1] Zhang, Qingfu, and Hui Li. “MOEA/D: A multiobjective evolutionary algorithm based on decomposition.” IEEE Transactions on Evolutionary Computation 11.6 (2007): 712-731.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.03 Version: 1.0

class ddmtolab.Algorithms.STMO.MOEA_D.MOEA_D(problem, n=None, max_nfes=None, decomp_type=1, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='MOEA-D', disable_tqdm=True)[source]

Bases: object

Multi-objective Evolutionary Algorithm Based on Decomposition.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, decomp_type=1, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='MOEA-D', disable_tqdm=True)[source]

Initialize MOEA/D algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • decomp_type (int, optional) – Decomposition approach type (default: 1): 1 = PBI (Penalty-based Boundary Intersection), 2 = Tchebycheff, 3 = Tchebycheff with normalization, 4 = Modified Tchebycheff

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MOEA-D’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MOEA/D algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
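Each decomp_type corresponds to a scalarizing function that turns the multi-objective problem into a family of single-objective subproblems. The Tchebycheff form (decomp_type 2) is the simplest to state: g(x | w, z*) = max_i w_i · |f_i(x) − z_i*|, where z* is the ideal point. A minimal sketch (the function name is an illustrative assumption):

```python
def tchebycheff(f, w, z):
    """Tchebycheff scalarization for one MOEA/D subproblem.

    f : objective vector f(x), w : weight vector, z : ideal point z*.
    Minimizing g over x pushes f(x) toward z* along the direction
    implied by w; varying w across subproblems spreads the front.
    """
    return max(wi * abs(fi - zi) for fi, wi, zi in zip(f, w, z))
```

PBI (decomp_type 1) additionally penalizes the distance of f(x) from the line through z* with direction w, trading uniformity of the front against convergence speed.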

Multi-objective Evolutionary Algorithm Based on Decomposition and Dominance (MOEA/DD)

This module implements MOEA/DD for multi/many-objective optimization problems.

References

[1] Li, Ke, et al. “An evolutionary many-objective optimization algorithm based on dominance and decomposition.” IEEE Transactions on Evolutionary Computation 19.5 (2015): 694-716.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.18 Version: 1.0

class ddmtolab.Algorithms.STMO.MOEA_DD.MOEA_DD(problem, n=None, max_nfes=None, delta=0.9, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='MOEA-DD', disable_tqdm=True)[source]

Bases: object

Multi-objective Evolutionary Algorithm Based on Decomposition and Dominance.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, delta=0.9, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='MOEA-DD', disable_tqdm=True)[source]

Initialize MOEA/DD.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • delta (float, optional) – Probability of choosing parents locally (default: 0.9)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MOEA-DD’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MOEA/DD algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

MOEA/D with Stable Matching (MOEA/D-STM)

This module implements MOEA/D-STM for multi-objective optimization problems.

References

[1] Li, Ke, et al. “Stable matching-based selection in evolutionary multiobjective optimization.” IEEE Transactions on Evolutionary Computation 18.6 (2014): 909-923.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.01 Version: 1.0

class ddmtolab.Algorithms.STMO.MOEA_D_STM.MOEA_D_STM(problem, n=None, max_nfes=None, T=None, save_data=True, save_path='./Data', name='MOEA-D-STM', disable_tqdm=True)[source]

Bases: object

MOEA/D with Stable Matching for multi-objective optimization.

This algorithm uses a stable matching model to select solutions for subproblems, ensuring a stable assignment between solutions and weight vectors.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, T=None, save_data=True, save_path='./Data', name='MOEA-D-STM', disable_tqdm=True)[source]

Initialize MOEA/D-STM algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • T (int or List[int], optional) – Size of neighborhood (default: ceil(n/10))

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MOEA-D-STM’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MOEA/D-STM algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
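The stable matching model behind MOEA/D-STM pairs subproblems with solutions so that no subproblem-solution pair would both prefer each other over their assigned partners. The classic deferred-acceptance (Gale-Shapley) procedure computes such a matching; the sketch below is the textbook one-to-one form with subproblems proposing, and the function name and preference encoding are illustrative assumptions rather than this module’s internals.

```python
def stable_matching(sub_pref, sol_pref):
    """Deferred acceptance: subproblems propose to solutions.

    sub_pref[p] : solutions ordered by subproblem p's preference (best first),
                  e.g. by scalarized fitness on subproblem p.
    sol_pref[s] : subproblems ordered by solution s's preference (best first),
                  e.g. by distance to the subproblem's weight vector.
    Returns match[p] = solution assigned to subproblem p.
    """
    n = len(sub_pref)
    rank = [{p: r for r, p in enumerate(pref)} for pref in sol_pref]
    match = [None] * n          # subproblem -> solution
    holder = [None] * n         # solution  -> subproblem currently held
    nxt = [0] * n               # next proposal index per subproblem
    free = list(range(n))
    while free:
        p = free.pop()
        s = sub_pref[p][nxt[p]]
        nxt[p] += 1
        if holder[s] is None:                       # solution unengaged
            holder[s], match[p] = p, s
        elif rank[s][p] < rank[s][holder[s]]:       # solution prefers p
            q = holder[s]
            match[q] = None
            free.append(q)
            holder[s], match[p] = p, s
        else:                                       # rejected, propose again
            free.append(p)
    return match
```

The resulting assignment balances convergence (subproblem preferences) against diversity (solution preferences), which is the selection criterion the paper advocates.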

MOEA/D with Fitness-Rate-Rank-based Multi-Armed Bandit (MOEA/D-FRRMAB)

This module implements MOEA/D-FRRMAB for multi-objective optimization problems.

References

[1] Li, Ke, et al. “Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition.” IEEE Transactions on Evolutionary Computation 18.1 (2014): 114-130.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.01 Version: 1.0

class ddmtolab.Algorithms.STMO.MOEA_D_FRRMAB.MOEA_D_FRRMAB(problem, n=None, max_nfes=None, C=5, W=None, D=1, T=20, nr=2, save_data=True, save_path='./Data', name='MOEA-D-FRRMAB', disable_tqdm=True)[source]

Bases: object

MOEA/D with Fitness-Rate-Rank-based Multi-Armed Bandit.

This algorithm uses a multi-armed bandit approach to adaptively select differential evolution operators during optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, C=5, W=None, D=1, T=20, nr=2, save_data=True, save_path='./Data', name='MOEA-D-FRRMAB', disable_tqdm=True)[source]

Initialize MOEA/D-FRRMAB algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • C (float, optional) – Scaling factor in bandit-based operator selection (default: 5)

  • W (int or List[int], optional) – Size of sliding window (default: ceil(n/2))

  • D (float, optional) – Decaying factor in calculating credit value (default: 1)

  • T (int, optional) – Size of neighborhood (default: 20)

  • nr (int, optional) – Maximum number of solutions replaced by each offspring (default: 2)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MOEA-D-FRRMAB’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MOEA/D-FRRMAB algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
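FRRMAB credits each DE operator by its recent fitness improvement rates in a sliding window, decays the credits by rank, and feeds the resulting FRR values into a UCB-style choice. The sketch below illustrates that pipeline in simplified form; the helper name, the exact decay and normalization details, and the window handling are assumptions, and the paper’s scheme differs in bookkeeping.

```python
import math

def frr_select(window, n_ops, C=5.0, D=1.0):
    """Simplified FRRMAB operator selection (illustrative sketch).

    window : list of (op, fir) pairs from the sliding window, where
             fir is a fitness improvement rate credited to operator op.
    C      : scaling factor of the UCB exploration term.
    D      : decaying factor applied by reward rank (D=1 disables decay).
    """
    reward = [0.0] * n_ops
    used = [0] * n_ops
    for op, fir in window:
        reward[op] += fir
        used[op] += 1
    # Rank operators by total reward (rank 0 = best) and decay by D**rank
    order = sorted(range(n_ops), key=lambda i: -reward[i])
    decayed = [0.0] * n_ops
    for r, i in enumerate(order):
        decayed[i] = (D ** r) * reward[i]
    total = sum(decayed) or 1.0
    frr = [d / total for d in decayed]              # fitness-rate-ranks
    total_used = sum(used)
    best, best_val = 0, -math.inf
    for i in range(n_ops):
        if used[i] == 0:
            return i                                # try unused operators first
        val = frr[i] + C * math.sqrt(2 * math.log(total_used) / used[i])
        if val > best_val:
            best, best_val = i, val
    return best
```

The constructor's C, W, and D map onto this scheme: C scales exploration, W bounds the window length, and D < 1 shifts credit toward the currently best-performing operators.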

Multiple Classifiers-assisted Evolutionary Algorithm based on Decomposition (MCEAD)

This module implements MCEAD for multi-objective optimization problems.

References

[1] T. Sonoda and M. Nakata. Multiple classifiers-assisted evolutionary algorithm based on decomposition for high-dimensional multi-objective problems. IEEE Transactions on Evolutionary Computation, 2022.

Notes

Author: Haowei Guo Email: ghw@mail.nwpu.edu.cn Date: 2026.01.06 Version: 1.0

class ddmtolab.Algorithms.STMO.MCEA_D.MCEA_D(problem, n=None, max_nfes=None, delta=0.9, nr=2, r_max=10, save_data=True, save_path='./Data', name='MCEA-D', disable_tqdm=True)[source]

Bases: object

Multiple Classifiers-assisted Evolutionary Algorithm based on Decomposition for multi-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, delta=0.9, nr=2, r_max=10, save_data=True, save_path='./Data', name='MCEA-D', disable_tqdm=True)[source]

Initialize MCEAD algorithm.

Parameters:
  • problem (MTOP) – Problem instance.

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • delta (float, optional) – Probability of choosing parents from neighborhood (default: 0.9).

  • nr (int, optional) – Maximum number of solutions replaced by each offspring (default: 2).

  • r_max (int, optional) – Maximum repeat time of offspring generation (default: 10).

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path for saving results (default: ‘./Data’)

  • name (str, optional) – Name of the experiment/file (default: ‘MCEA-D’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MCEA-D algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Reference Vector Guided Evolutionary Algorithm (RVEA)

This module implements RVEA for many-objective optimization problems.

References

[1] Cheng, Ran, et al. “A reference vector guided evolutionary algorithm for many-objective optimization.” IEEE Transactions on Evolutionary Computation 20.5 (2016): 773-791.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.10.25 Version: 1.0

class ddmtolab.Algorithms.STMO.RVEA.RVEA(problem, n=None, max_nfes=None, alpha=2.0, fr=0.1, save_data=True, save_path='./Data', name='RVEA', disable_tqdm=True)[source]

Bases: object

Reference Vector Guided Evolutionary Algorithm for many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, alpha=2.0, fr=0.1, save_data=True, save_path='./Data', name='RVEA', disable_tqdm=True)[source]

Initialize RVEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • alpha (float, optional) – Parameter controlling the rate of change of penalty (default: 2.0)

  • fr (float, optional) – Frequency of employing reference vector adaptation (default: 0.1)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘RVEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the RVEA algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
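The angle-penalized distance (APD) that drives RVEA's selection, with alpha controlling how the diversity penalty grows over the run, can be sketched as follows (a hypothetical helper for illustration, not the library's internal API; the names `apd`, `gamma`, and `progress` are chosen here):

```python
import math

def apd(f, v, gamma, progress, m, alpha=2.0):
    """Angle-penalized distance of objective vector f w.r.t. reference vector v.

    gamma    -- smallest angle between v and its neighbouring reference vectors
    progress -- nfes / max_nfes in [0, 1]
    m        -- number of objectives; alpha matches the constructor parameter
    """
    norm_f = math.sqrt(sum(x * x for x in f))
    norm_v = math.sqrt(sum(x * x for x in v))
    cos_theta = sum(a * b for a, b in zip(f, v)) / (norm_f * norm_v)
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    # Early in the run the penalty is weak (convergence first); as progress
    # approaches 1 the angle term dominates, enforcing diversity.
    return (1.0 + m * (progress ** alpha) * theta / gamma) * norm_f
```

At `progress = 0` the APD reduces to the plain objective norm, which is why larger `alpha` delays the switch from convergence pressure to diversity pressure.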

Kriging-assisted Reference Vector Guided Evolutionary Algorithm (K-RVEA)

This module implements K-RVEA for computationally expensive multi/many-objective optimization.

References

[1] T. Chugh, Y. Jin, K. Miettinen, J. Hakanen, and K. Sindhya. A surrogate-assisted reference vector guided evolutionary algorithm for computationally expensive many-objective optimization. IEEE Transactions on Evolutionary Computation, 2018, 22(1): 129-142.

Notes

Author: Jiangtao Shen Date: 2025.01.11 Version: 2.0

class ddmtolab.Algorithms.STMO.K_RVEA.K_RVEA(problem, n_initial=None, max_nfes=None, n=100, alpha=2.0, wmax=20, mu=5, save_data=True, save_path='./Data', name='K-RVEA', disable_tqdm=True)[source]

Bases: object

Kriging-assisted Reference Vector Guided Evolutionary Algorithm for expensive multi/many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=100, alpha=2.0, wmax=20, mu=5, save_data=True, save_path='./Data', name='K-RVEA', disable_tqdm=True)[source]

Initialize K-RVEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population size (number of reference vectors) per task (default: 100)

  • alpha (float, optional) – Parameter controlling the rate of change of penalty (default: 2.0)

  • wmax (int, optional) – Number of generations before updating Kriging models (default: 20)

  • mu (int, optional) – Number of re-evaluated solutions at each generation (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘K-RVEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the K-RVEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Indicator-Based Evolutionary Algorithm (IBEA)

This module implements IBEA for multi-objective optimization problems.

References

[1] Zitzler, Eckart, and Simon Künzli. “Indicator-based selection in multiobjective search.” International Conference on Parallel Problem Solving from Nature, 2004.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.13 Version: 1.0

class ddmtolab.Algorithms.STMO.IBEA.IBEA(problem, n=None, max_nfes=None, kappa=0.05, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='IBEA', disable_tqdm=True)[source]

Bases: object

Indicator-Based Evolutionary Algorithm for multi-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, kappa=0.05, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='IBEA', disable_tqdm=True)[source]

Initialize IBEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • kappa (float, optional) – Fitness scaling factor (default: 0.05)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘IBEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the IBEA algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
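The role of the kappa scaling factor can be seen in a minimal sketch of IBEA's additive-epsilon-indicator fitness (hypothetical helper names, objectives assumed pre-scaled to [0, 1]; the library's internal implementation may differ in detail):

```python
import math

def eps_indicator(a, b):
    """Additive epsilon indicator: the smallest shift of a that makes it
    weakly dominate b (minimization)."""
    return max(ai - bi for ai, bi in zip(a, b))

def ibea_fitness(objs, kappa=0.05):
    """IBEA fitness of each solution: sum of negative exponentials of the
    indicator values of all other solutions against it (higher is better)."""
    # Normalize indicator values by their largest absolute value.
    c = max(abs(eps_indicator(a, b)) for a in objs for b in objs if a != b)
    return [sum(-math.exp(-eps_indicator(other, x) / (c * kappa))
                for other in objs if other is not x)
            for x in objs]
```

Smaller kappa amplifies the influence of the single strongest dominator, while larger kappa averages the pressure over the whole population.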

Strength Pareto Evolutionary Algorithm 2 (SPEA2)

This module implements SPEA2 for multi-objective optimization problems.

References

[1] Zitzler, E., Laumanns, M., & Thiele, L. (2001). SPEA2: Improving the Strength Pareto Evolutionary Algorithm For Multiobjective Optimization. In Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems. Proceedings of the EUROGEN’2001.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.13 Version: 1.0

class ddmtolab.Algorithms.STMO.SPEA2.SPEA2(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, epsilon=0, save_data=True, save_path='./Data', name='SPEA2', disable_tqdm=True)[source]

Bases: object

Strength Pareto Evolutionary Algorithm 2 for multi-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, epsilon=0, save_data=True, save_path='./Data', name='SPEA2', disable_tqdm=True)[source]

Initialize SPEA2 algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • epsilon (float, optional) – Constraint epsilon value for epsilon-constraint method (default: 0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SPEA2’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SPEA2 algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Two-Archive Algorithm 2 (Two_Arch2)

This module implements Two_Arch2 for many-objective optimization problems.

References

[1] Wang, H., Jiao, L., & Yao, X. (2015). Two_Arch2: An improved two-archive algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, 19(4), 524-541.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.13 Version: 1.0

class ddmtolab.Algorithms.STMO.TwoArch2.TwoArch2(problem, n=None, max_nfes=None, CA_size=None, p=None, save_data=True, save_path='./Data', name='Two_Arch2', disable_tqdm=True)[source]

Bases: object

Two-Archive Algorithm 2 for many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, CA_size=None, p=None, save_data=True, save_path='./Data', name='Two_Arch2', disable_tqdm=True)[source]

Initialize Two_Arch2 algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • CA_size (int or None, optional) – Convergence archive size (default: None, will be set to population size)

  • p (float or None, optional) – Parameter for fractional distance (default: None, will be set to 1/M)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘Two_Arch2’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Two_Arch2 algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
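The `p` parameter above controls the fractional L_p distance that Two_Arch2 uses for diversity maintenance in its diversity archive; a minimal sketch (hypothetical helper name):

```python
def lp_distance(a, b, p):
    """L_p distance with possibly fractional p (Two_Arch2 defaults to p = 1/M).

    For p < 1 this is not a true metric, but it discriminates between points
    better than the Euclidean distance in high-dimensional objective spaces.
    """
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)
```

With many objectives (large M), p = 1/M shrinks toward 0 and the distance increasingly rewards differing in several objectives at once rather than differing a lot in one.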

Coevolutionary Framework for Constrained Multiobjective Optimization (CCMO)

This module implements CCMO for constrained multi-objective optimization problems.

References

[1] Tian, Ye, et al. “A Coevolutionary Framework for Constrained Multiobjective Optimization Problems.” IEEE Transactions on Evolutionary Computation 25.1 (2021): 102-116.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.14 Version: 1.0

class ddmtolab.Algorithms.STMO.CCMO.CCMO(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='CCMO', disable_tqdm=True)[source]

Bases: object

Coevolutionary Framework for Constrained Multiobjective Optimization.

CCMO uses two co-evolving populations:

  • Population 1: Optimizes objectives with strict constraint handling (epsilon=0)

  • Population 2: Optimizes objectives with relaxed constraints (epsilon=inf)

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='CCMO', disable_tqdm=True)[source]

Initialize CCMO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘CCMO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the CCMO algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
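The difference between the two populations comes down to an epsilon-relaxed feasibility comparison, sketched here on a scalar fitness stand-in (hypothetical helper names; CCMO itself compares SPEA2-style fitness values):

```python
def violation(cons):
    """Overall constraint violation: sum of positive parts of g(x) <= 0 values."""
    return sum(max(0.0, c) for c in cons)

def better(f1, cv1, f2, cv2, eps):
    """Epsilon-constrained comparison for minimization.

    With eps = 0 (Population 1) any infeasible point loses to a feasible one;
    with eps = float('inf') (Population 2) constraints are ignored entirely.
    """
    feas1, feas2 = cv1 <= eps, cv2 <= eps
    if feas1 and not feas2:
        return True
    if feas2 and not feas1:
        return False
    return f1 < f2
```

Sharing offspring between the strict and relaxed populations is what lets CCMO cross infeasible barriers while still converging to the constrained Pareto front.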

Constrained Two-Archive Evolutionary Algorithm (C-TAEA)

This module implements C-TAEA for constrained multi-objective optimization problems.

References

[1] Li, Ke, et al. “Two-archive evolutionary algorithm for constrained multi-objective optimization.” IEEE Transactions on Evolutionary Computation 23.2 (2018): 303-315.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.01 Version: 1.0

class ddmtolab.Algorithms.STMO.C_TAEA.C_TAEA(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='C-TAEA', disable_tqdm=True)[source]

Bases: object

Constrained Two-Archive Evolutionary Algorithm for constrained multi-objective optimization.

C-TAEA uses two co-evolving archives:

  • Convergence Archive (CA): Focuses on convergence towards the Pareto front

  • Diversity Archive (DA): Maintains diversity in the objective space

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='C-TAEA', disable_tqdm=True)[source]

Initialize C-TAEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘C-TAEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the C-TAEA algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Classification and Pareto Domination Based Multi-Objective Evolutionary Algorithm (CPS-MOEA)

This module implements CPS-MOEA for multi-objective optimization problems.

References

[1] J. Zhang, A. Zhou, and G. Zhang. “A classification and Pareto domination based multiobjective evolutionary algorithm.” Proceedings of the IEEE Congress on Evolutionary Computation, 2015, 2883-2890.

Notes

Author: Converted from MATLAB implementation Date: 2025.01.22 Version: 1.0

class ddmtolab.Algorithms.STMO.CPS_MOEA.CPS_MOEA(problem, n=None, max_nfes=None, M=3, CR=1.0, F=0.5, proM=1.0, disM=20.0, k_neighbors=5, save_data=True, save_path='./Data', name='CPS-MOEA', disable_tqdm=True)[source]

Bases: object

Classification and Pareto Domination Based Multi-Objective Evolutionary Algorithm.

This algorithm uses KNN-based classification to distinguish between good and bad solutions, guiding offspring generation via differential evolution with surrogate-assisted pre-selection.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, M=3, CR=1.0, F=0.5, proM=1.0, disM=20.0, k_neighbors=5, save_data=True, save_path='./Data', name='CPS-MOEA', disable_tqdm=True)[source]

Initialize CPS-MOEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • M (int, optional) – Number of candidate offspring generated per solution (default: 3)

  • CR (float, optional) – Crossover probability for differential evolution (default: 1.0)

  • F (float, optional) – Scaling factor for differential evolution (default: 0.5)

  • proM (float, optional) – Expected number of mutated variables (default: 1.0)

  • disM (float, optional) – Distribution index for polynomial mutation (default: 20.0)

  • k_neighbors (int, optional) – Number of neighbors for KNN classification (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘CPS-MOEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the CPS-MOEA algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
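The `k_neighbors` pre-selection can be illustrated with a bare-bones KNN vote over labeled decision vectors (a concept sketch with hypothetical names, not the library's classifier):

```python
def knn_label(x, samples, labels, k=5):
    """Classify candidate x as promising (+1) or not (-1) by majority vote
    among its k nearest training samples (squared Euclidean distance in
    decision space)."""
    order = sorted(range(len(samples)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(x, samples[i])))
    vote = sum(labels[i] for i in order[:k])
    return 1 if vote >= 0 else -1
```

In CPS-MOEA each parent generates M candidate offspring, and only the candidate the classifier labels most promising is submitted for a real function evaluation.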

Pareto Efficient Global Optimization (ParEGO)

This module implements ParEGO for expensive multi-objective optimization problems.

References

[1] Knowles, Joshua. “ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems.” IEEE Transactions on Evolutionary Computation 10.1 (2006): 50-66.

Notes

Author: Jiangtao Shen Date: 2025.01.10 Version: 1.0

class ddmtolab.Algorithms.STMO.ParEGO.ParEGO(problem, n_initial=None, n_weights=None, max_nfes=None, rho=0.05, save_data=True, save_path='./Data', name='ParEGO', disable_tqdm=True)[source]

Bases: object

Pareto Efficient Global Optimization algorithm for expensive multi-objective optimization.

ParEGO uses scalarization with randomly selected weight vectors to convert multi-objective problems into single-objective problems, which are then solved using Bayesian Optimization with Expected Improvement.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, n_weights=None, max_nfes=None, rho=0.05, save_data=True, save_path='./Data', name='ParEGO', disable_tqdm=True)[source]

Initialize ParEGO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim - 1, following Knowles 2006)

  • n_weights (int or List[int], optional) – Number of reference weight vectors per task (default: 10)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 100)

  • rho (float, optional) – Augmentation coefficient for augmented Tchebycheff scalarization (default: 0.05)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘ParEGO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the ParEGO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
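The scalarization that converts each multi-objective problem into a single-objective one, with `rho` as the augmentation coefficient, is the augmented Tchebycheff function (sketched here with a hypothetical helper name; objectives are assumed normalized to [0, 1] as in Knowles 2006):

```python
def augmented_tchebycheff(f, lam, rho=0.05):
    """Augmented Tchebycheff scalarization used by ParEGO.

    f   -- normalized objective vector
    lam -- non-negative weight vector summing to 1
    rho -- augmentation coefficient (constructor parameter)
    """
    weighted = [l * fi for l, fi in zip(lam, f)]
    # The max term drives convergence toward the weight direction; the small
    # rho-weighted sum breaks ties and avoids weakly Pareto-optimal points.
    return max(weighted) + rho * sum(weighted)
```

Drawing a fresh weight vector at every iteration is what lets a single Kriging model of the scalarized cost cover the whole Pareto front over the run.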

Multistage Evolutionary Algorithm (MSEA)

This module implements MSEA for better diversity preservation in multi-objective optimization.

References

[1] Tian, Ye, et al. “A multistage evolutionary algorithm for better diversity preservation in multiobjective optimization.” IEEE Transactions on Systems, Man, and Cybernetics: Systems, 51.9 (2021): 5880-5894.

Notes

Date: 2025.12.12 Version: 1.0

class ddmtolab.Algorithms.STMO.MSEA.MSEA(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='MSEA', disable_tqdm=True)[source]

Bases: object

Multistage Evolutionary Algorithm for diversity preservation in multi-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='MSEA', disable_tqdm=True)[source]

Initialize MSEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MSEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MSEA algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Expensive Multiobjective Optimization by Relation Learning and Prediction (REMO)

This module implements the REMO algorithm. It utilizes a neural network to learn and predict dominance relationships between candidate solutions and reference solutions, guiding the evolutionary search process efficiently under limited evaluation budgets.

References

[1] H. Hao, A. Zhou, H. Qian, and H. Zhang. Expensive multiobjective optimization by relation learning and prediction. IEEE Transactions on Evolutionary Computation, 2022.

Notes

Author: Haowei Guo Email: ghw@mail.nwpu.edu.cn Date: 2026.01.16 Version: 1.1

class ddmtolab.Algorithms.STMO.REMO.REMO(problem, n=50, max_nfes=300, k=6, gmax=3000, save_data=True, save_path='./Data', name='REMO', disable_tqdm=False, **kwargs)[source]

Bases: object

Expensive Multiobjective Optimization by Relation Learning and Prediction (REMO)

algorithm_information

Dictionary containing algorithm capabilities (e.g., supported objectives, constraints)

Type:

dict

__init__(problem, n=50, max_nfes=300, k=6, gmax=3000, save_data=True, save_path='./Data', name='REMO', disable_tqdm=False, **kwargs)[source]

Initialize the REMO algorithm parameters.

Parameters:
  • problem (MTOP) – The optimization problem instance.

  • n (int or List[int]) – Population size (default: 50).

  • max_nfes (int or List[int]) – Maximum number of function evaluations (default: 300).

  • k (int) – Number of reference solutions used for relation learning (default: 6).

  • gmax (int) – Maximum total steps for internal surrogate-assisted evolution (default: 3000).

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘REMO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: False)

optimize()[source]

Execute the main optimization loop of REMO.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
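The relation-learning step trains the network on labeled (candidate, reference) pairs; the labeling itself reduces to a dominance check (a concept sketch with a hypothetical helper name):

```python
def relation_label(x_obj, ref_obj):
    """Label a (candidate, reference) objective-vector pair for relation
    learning: +1 if the candidate dominates the reference, -1 if it is
    dominated, 0 if the pair is nondominated (minimization)."""
    no_worse = all(a <= b for a, b in zip(x_obj, ref_obj))
    no_better = all(a >= b for a, b in zip(x_obj, ref_obj))
    if no_worse and not no_better:
        return 1
    if no_better and not no_worse:
        return -1
    return 0
```

Predicting these labels against the k reference solutions lets REMO rank unevaluated candidates without spending real function evaluations.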

Kriging-Assisted Two-Archive Evolutionary Algorithm 2 (KTA2)

This module implements KTA2 for computationally expensive many-objective optimization. It maintains two archives (convergence and diversity) and uses point-insensitive Kriging models with adaptive sampling strategies.

References

[1] Z. Song, H. Wang, C. He, and Y. Jin. A Kriging-assisted two-archive evolutionary algorithm for expensive many-objective optimization. IEEE Transactions on Evolutionary Computation, 2021, 25(6): 1013-1027.

Notes

Author: Jiangtao Shen Date: 2026.02.16 Version: 1.0

class ddmtolab.Algorithms.STMO.KTA2.KTA2(problem, n_initial=None, max_nfes=None, n=100, tau=0.75, phi=0.1, wmax=10, mu=5, save_data=True, save_path='./Data', name='KTA2', disable_tqdm=True)[source]

Bases: object

Kriging-Assisted Two-Archive Evolutionary Algorithm 2 for expensive many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=100, tau=0.75, phi=0.1, wmax=10, mu=5, save_data=True, save_path='./Data', name='KTA2', disable_tqdm=True)[source]

Initialize KTA2 algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population/archive size per task (default: 100)

  • tau (float, optional) – Proportion of training data for insensitive models (default: 0.75)

  • phi (float, optional) – Fraction of DA for uncertainty sampling (default: 0.1)

  • wmax (int, optional) – Number of inner surrogate evolution generations (default: 10)

  • mu (int, optional) – Number of re-evaluated solutions per generation (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘KTA2’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the KTA2 algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Dual-Surrogate Assisted Evolutionary Algorithm with Portfolio Strategy (DSAEA-PS)

This module implements DSAEA-PS for computationally expensive multi/many-objective optimization. It uses two types of surrogates (Kriging for objective prediction and RBF for dominance relation prediction) combined with a portfolio of three environmental selection strategies (IBEA, RVEA, NSGA-II/CSDR) to balance convergence and diversity.

References

[1] J. Shen, P. Wang, Y. Tian, and H. Dong. A dual surrogate assisted evolutionary algorithm based on parallel search for expensive multi/many-objective optimization. Applied Soft Computing, 2023, 148: 110879.

Notes

Author: Jiangtao Shen Date: 2026.02.18 Version: 1.0

class ddmtolab.Algorithms.STMO.DSAEA_PS.DSAEA_PS(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, save_data=True, save_path='./Data', name='DSAEA-PS', disable_tqdm=True)[source]

Bases: object

Dual-Surrogate Assisted Evolutionary Algorithm with Portfolio Strategy for expensive multi/many-objective optimization.

Uses Kriging models for objective prediction and an RBF model for dominance relation prediction, combined with three environmental selection strategies (IBEA, RVEA, CSDR).

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, save_data=True, save_path='./Data', name='DSAEA-PS', disable_tqdm=True)[source]

Initialize DSAEA-PS algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population size (number of reference vectors) per task (default: 100)

  • wmax (int, optional) – Number of inner surrogate evolution generations (default: 20)

  • mu (int, optional) – Number of re-evaluated solutions per iteration (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘DSAEA-PS’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the DSAEA-PS algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Adaptive Dropout based Surrogate-Assisted Particle Swarm Optimization (ADSAPSO)

This module implements ADSAPSO for computationally expensive multi/many-objective optimization. It uses adaptive dropout to select important decision variables, builds RBF surrogate models in the reduced space, and applies PSO on surrogates to find promising solutions.

References

[1] J. Lin, C. He, and R. Cheng. Adaptive dropout for high-dimensional expensive multiobjective optimization. Complex & Intelligent Systems, 2022, 8(1): 271-285.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.ADSAPSO.ADSAPSO(problem, n_initial=100, max_nfes=None, n=100, k=5, beta=0.5, n_a=200, n_s=50, save_data=True, save_path='./Data', name='ADSAPSO', disable_tqdm=True)[source]

Bases: object

Adaptive Dropout based Surrogate-Assisted Particle Swarm Optimization for expensive multi/many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=100, max_nfes=None, n=100, k=5, beta=0.5, n_a=200, n_s=50, save_data=True, save_path='./Data', name='ADSAPSO', disable_tqdm=True)[source]

Initialize ADSAPSO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population size for PSO per task (default: 100)

  • k (int, optional) – Number of re-evaluated solutions per generation (default: 5)

  • beta (float, optional) – Percentage of dropout (fraction of dimensions to keep) (default: 0.5)

  • n_a (int, optional) – Number of solutions for building surrogate models (default: 200)

  • n_s (int, optional) – Number of well/poorly performing solutions for dimension analysis (default: 50)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘ADSAPSO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the ADSAPSO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
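The dropout idea of keeping only the most informative decision dimensions can be sketched with a simple mean-gap criterion (hypothetical helper names, and a deliberate simplification: the published method compares the full distributions of well- and poorly-performing solutions, not just their means):

```python
def select_dimensions(good, bad, beta=0.5):
    """Pick the beta fraction of decision dimensions along which
    well-performing solutions differ most from poorly-performing ones.

    good/bad -- lists of decision vectors; returns sorted dimension indices.
    """
    dim = len(good[0])

    def mean(rows, j):
        return sum(r[j] for r in rows) / len(rows)

    gap = [abs(mean(good, j) - mean(bad, j)) for j in range(dim)]
    keep = max(1, int(beta * dim))
    # Keep the dimensions with the largest good-vs-bad gap.
    return sorted(sorted(range(dim), key=lambda j: -gap[j])[:keep])
```

Surrogates are then built, and PSO run, only in this reduced subspace, which is what makes the approach viable for high-dimensional expensive problems.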

Classification-based Surrogate-assisted Evolutionary Algorithm (CSEA)

This module implements CSEA for computationally expensive multi/many-objective optimization. It uses a neural network classifier to distinguish promising solutions from non-promising ones relative to reference solutions, with adaptive surrogate-assisted selection strategies.

References

[1] L. Pan, C. He, Y. Tian, H. Wang, X. Zhang, and Y. Jin. A classification based surrogate-assisted evolutionary algorithm for expensive many-objective optimization. IEEE Transactions on Evolutionary Computation, 2019, 23(1): 74-88.

Notes

Author: Jiangtao Shen Date: 2026.02.16 Version: 1.0

class ddmtolab.Algorithms.STMO.CSEA.CSEA(problem, n_initial=None, max_nfes=None, n=None, k=6, gmax=3000, save_data=True, save_path='./Data', name='CSEA', disable_tqdm=True)[source]

Bases: object

Classification-based Surrogate-assisted Evolutionary Algorithm for expensive multi/many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=None, k=6, gmax=3000, save_data=True, save_path='./Data', name='CSEA', disable_tqdm=True)[source]

Initialize CSEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: min(11*dim-1, 109))

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population size per task (default: same as n_initial)

  • k (int, optional) – Number of reference solutions (default: 6)

  • gmax (int, optional) – Number of solutions evaluated by surrogate model (default: 3000)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘CSEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the CSEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
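The documented default for n_initial follows a simple rule:

```python
def csea_default_n_initial(dim):
    # Documented CSEA default: 11*dim - 1, capped at 109
    return min(11 * dim - 1, 109)

print(csea_default_n_initial(5), csea_default_n_initial(30))  # 54 109
```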

Distribution-Informed Surrogate-assisted Kriging (DISK)

This module implements DISK for computationally expensive multi-objective optimization. It uses Distribution-Informed Probabilistic Dominance (DIPD) that combines prediction uncertainty from Kriging with the probability distribution learned from Pareto-optimal solutions. It features adaptive local search guided by weight vector identification to fill gaps in the Pareto front.

References

[1] Z. Song, H. Wang, and H. Xu. DISK: A Kriging-Assisted Multi-Objective Optimization Algorithm with Distribution-Informed Probabilistic Dominance. IEEE Transactions on Evolutionary Computation, 2024.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.DISK.DISK(problem, n_initial=None, max_nfes=None, n=100, wmax=60, alpha=5, save_data=True, save_path='./Data', name='DISK', disable_tqdm=True)[source]

Bases: object

Distribution-Informed Surrogate-assisted Kriging for expensive multi-objective optimization with adaptive local search.

__init__(problem, n_initial=None, max_nfes=None, n=100, wmax=60, alpha=5, save_data=True, save_path='./Data', name='DISK', disable_tqdm=True)[source]

Initialize DISK algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population/archive size per task (default: 100)

  • wmax (int, optional) – Surrogate evolution generations (default: 60)

  • alpha (int, optional) – Number of candidates per iteration (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘DISK’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the DISK algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
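DISK's dominance relation folds Kriging prediction uncertainty into pairwise comparisons. A generic sketch of the underlying idea, assuming independent Gaussian predictions per objective (this is not the exact DIPD formula from the paper):

```python
import math

def prob_better(mu_a, s_a, mu_b, s_b):
    """P(f_a < f_b) when both predictions are independent Gaussians
    (generic sketch, not the paper's exact DIPD criterion)."""
    denom = math.sqrt(s_a ** 2 + s_b ** 2) or 1e-12
    z = (mu_b - mu_a) / denom
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

print(round(prob_better(0.0, 1.0, 0.0, 1.0), 3))  # 0.5
```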

Deep Reinforcement Learning-assisted Surrogate-Assisted Evolutionary Algorithm (DRL-SAEA)

This module implements DRL-SAEA for computationally expensive constrained multi-objective optimization. It uses a Double Deep Q-Network (DDQN) to dynamically select among three constraint handling strategies for surrogate model management.

References

[1] S. Shao, Y. Tian, and Y. Zhang. Deep reinforcement learning assisted surrogate model management for expensive constrained multi-objective optimization. Swarm and Evolutionary Computation, 2025, 92: 101817.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.DRLSAEA.DRLSAEA(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, save_data=True, save_path='./Data', name='DRL-SAEA', disable_tqdm=True)[source]

Bases: object

Deep Reinforcement Learning-assisted Surrogate-Assisted Evolutionary Algorithm for expensive constrained multi-objective optimization.

Uses DDQN to select among three constraint handling strategies:
  • Action 0: objectives + normalized aggregate CV

  • Action 1: objectives + individual normalized active constraints

  • Action 2: objectives only (unconstrained)

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, save_data=True, save_path='./Data', name='DRL-SAEA', disable_tqdm=True)[source]

Initialize DRL-SAEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Archive/population size per task (default: 100)

  • wmax (int, optional) – Number of inner GA generations on surrogates (default: 20)

  • mu (int, optional) – Number of real evaluated solutions per iteration (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘DRL-SAEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the DRL-SAEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
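The three actions amount to three different surrogate training targets. A NumPy sketch under assumed array shapes (`objs` of shape (n, m), `cons` of shape (n, c), with non-positive constraint values meaning satisfied); the normalization details here are assumptions:

```python
import numpy as np

def action_targets(objs, cons, action):
    """Sketch of the three training targets selected by the DDQN."""
    if action == 0:
        # objectives + normalized aggregate constraint violation
        cv = np.maximum(cons, 0).sum(axis=1, keepdims=True)
        cv = cv / (cv.max() or 1.0)
        return np.hstack([objs, cv])
    if action == 1:
        # objectives + individually normalized active constraints
        active = np.maximum(cons, 0)
        scale = active.max(axis=0)
        scale[scale == 0] = 1.0
        return np.hstack([objs, active / scale])
    return objs  # action 2: objectives only (unconstrained)
```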

Expected Direction-based Hypervolume Improvement (DirHV-EI)

This module implements DirHV-EI for parallel expensive multi/many-objective optimization. It uses GP surrogates with a MOEA/D-GR framework to maximize direction-based hypervolume expected improvement, and a greedy batch selection strategy.

References

[1] L. Zhao and Q. Zhang. Hypervolume-guided decomposition for parallel expensive multiobjective optimization. IEEE Transactions on Evolutionary Computation, 2024, 28(2): 432-444.

Notes

Author: Jiangtao Shen Date: 2026.02.16 Version: 1.0

class ddmtolab.Algorithms.STMO.DirHV_EI.DirHV_EI(problem, n_initial=None, max_nfes=None, batch_size=5, save_data=True, save_path='./Data', name='DirHV-EI', disable_tqdm=True)[source]

Bases: object

Expected Direction-based Hypervolume Improvement for parallel expensive multi/many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, batch_size=5, save_data=True, save_path='./Data', name='DirHV-EI', disable_tqdm=True)[source]

Initialize DirHV-EI algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • batch_size (int, optional) – Number of true function evaluations per iteration (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘DirHV-EI’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the DirHV-EI algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
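The greedy batch selection can be pictured as repeatedly taking the candidate with the largest remaining improvement and discounting what it already covers. A generic sketch with a hypothetical `gain` matrix (candidate i's improvement on subproblem j); the discounting rule is an assumption, not the paper's exact update:

```python
import numpy as np

def greedy_batch(gain, q):
    """Greedy batch selection sketch over a candidates-by-subproblems gain matrix."""
    gain = gain.astype(float).copy()
    chosen = []
    for _ in range(q):
        i = int(np.argmax(gain.sum(axis=1)))
        chosen.append(i)
        covered = gain[i] > 0
        gain[:, covered] = np.minimum(gain[:, covered], 0)  # no double counting
        gain[i] = -np.inf  # never pick the same candidate twice
    return chosen
```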

Efficient Dropout Neural Network based AR-MOEA (EDN-ARMOEA)

This module implements EDN-ARMOEA for computationally expensive multi/many-objective optimization. It uses a dropout neural network as surrogate model with MC dropout for uncertainty estimation, combined with an adaptive reference point based evolutionary algorithm (AR-MOEA).

References

[1] D. Guo, X. Wang, K. Gao, Y. Jin, J. Ding, and T. Chai. Evolutionary optimization of high-dimensional multiobjective and many-objective expensive problems assisted by a dropout neural network. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2022, 52(4): 2084-2097.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.EDN_ARMOEA.EDN_ARMOEA(problem, n_initial=None, max_nfes=None, n=100, delta=0.05, wmax=20, ke=3, save_data=True, save_path='./Data', name='EDN-ARMOEA', disable_tqdm=True)[source]

Bases: object

Efficient Dropout Neural Network based AR-MOEA for expensive multi/many-objective optimization.

__init__(problem, n_initial=None, max_nfes=None, n=100, delta=0.05, wmax=20, ke=3, save_data=True, save_path='./Data', name='EDN-ARMOEA', disable_tqdm=True)[source]

Initialize EDN-ARMOEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population size per task (default: 100)

  • delta (float, optional) – Threshold for judging diversity improvement (default: 0.05)

  • wmax (int, optional) – Number of generations before updating models (default: 20)

  • ke (int, optional) – Number of solutions to be re-evaluated in each iteration (default: 3)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EDN-ARMOEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EDN-ARMOEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
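MC dropout estimates uncertainty by running the same network several times with random weight masks and reading off the spread of the predictions. A toy stand-in using a linear model instead of the paper's dropout neural network:

```python
import numpy as np

def mc_dropout_predict(weights, x, p=0.5, T=100, seed=0):
    """MC-dropout sketch for y = w.x: mask weights T times, report mean and std."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(T):
        mask = rng.random(len(weights)) > p               # drop each weight with prob p
        preds.append(np.dot(weights * mask / (1 - p), x))  # inverted-dropout scaling
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean, std = mc_dropout_predict(np.array([1.0, 1.0]), np.array([1.0, 1.0]))
```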

Expected Improvement Matrix based Efficient Global Optimization (EIM-EGO)

This module implements EIM-EGO for computationally expensive multi-objective optimization. It builds Kriging (Gaussian Process) models for each objective and uses the Expected Improvement Matrix (EIM) criterion to select one promising candidate per iteration via GA optimization.

Three EIM infill criteria are supported:
  1. Euclidean distance-based EIM criterion (default)

  2. Maximin distance-based EIM criterion

  3. Hypervolume-based EIM criterion
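The Euclidean criterion (eim_type=1) aggregates the per-objective expected improvement of a prediction over each nondominated point by the minimum Euclidean norm. A self-contained sketch of that computation (the aggregation form is taken from the criterion's name; details may differ from the implementation):

```python
import math

def eim_euclidean(front, mu, s):
    """Euclidean EIM sketch: min over front points of the norm of
    per-objective expected improvements of the prediction (mu, s)."""
    def ei(delta, sigma):
        if sigma <= 0:
            return max(delta, 0.0)
        z = delta / sigma
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
        cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
        return delta * cdf + sigma * pdf
    norms = []
    for f in front:
        comps = [ei(fj - m, sg) for fj, m, sg in zip(f, mu, s)]
        norms.append(math.sqrt(sum(c * c for c in comps)))
    return min(norms)
```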

References

[1] D. Zhan, Y. Cheng, and J. Liu. Expected improvement matrix-based infill criteria for expensive multiobjective optimization. IEEE Transactions on Evolutionary Computation, 2017, 21(6): 956-975.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.EIM_EGO.EIM_EGO(problem, n_initial=None, max_nfes=None, eim_type=1, ga_pop_size=100, ga_generations=100, save_data=True, save_path='./Data', name='EIM-EGO', disable_tqdm=True)[source]

Bases: object

Expected Improvement Matrix based Efficient Global Optimization for expensive multi-objective optimization.

__init__(problem, n_initial=None, max_nfes=None, eim_type=1, ga_pop_size=100, ga_generations=100, save_data=True, save_path='./Data', name='EIM-EGO', disable_tqdm=True)[source]

Initialize EIM-EGO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • eim_type (int, optional) – EIM infill criterion type: 1 = Euclidean distance-based, 2 = Maximin distance-based, 3 = Hypervolume-based (default: 1)

  • ga_pop_size (int, optional) – Population size for internal GA optimizer (default: 100)

  • ga_generations (int, optional) – Number of generations for internal GA optimizer (default: 100)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EIM-EGO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EIM-EGO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Ensemble-based Surrogate Model-Assisted Evolutionary Algorithm (EM-SAEA)

This module implements EM-SAEA for computationally expensive constrained/unconstrained multi/many-objective optimization. It uses a two-stage approach:
  • Stage 1 (objective-oriented, FE < 50% budget): RVMM-based search with two sub-populations

  • Stage 2 (constraint-oriented, FE >= 50% budget): MOEA/D with ensemble constraint models

References

[1] Y. Li, X. Feng, and H. Yu. Enhancing landscape approximation with ensemble-based surrogate model for expensive constrained multiobjective optimization. IEEE Transactions on Evolutionary Computation, 2025.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.EM_SAEA.EM_SAEA(problem, n_initial=None, max_nfes=None, n=100, wmax=20, lc_num=5, mu=5, alpha=2.0, kk=0.5, save_data=True, save_path='./Data', name='EM-SAEA', disable_tqdm=True)[source]

Bases: object

Ensemble-based Surrogate Model-Assisted Evolutionary Algorithm for expensive constrained/unconstrained multi/many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=100, wmax=20, lc_num=5, mu=5, alpha=2.0, kk=0.5, save_data=True, save_path='./Data', name='EM-SAEA', disable_tqdm=True)[source]

Initialize EM-SAEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population size (number of reference vectors) per task (default: 100)

  • wmax (int, optional) – Number of generations before updating surrogate models (default: 20)

  • lc_num (int, optional) – Number of local constraint model clusters (default: 5)

  • mu (int, optional) – Number of re-evaluated solutions per iteration in stage 2 (default: 5)

  • alpha (float, optional) – Parameter controlling APD penalty rate (default: 2.0)

  • kk (float, optional) – Uncertainty weighting factor for MSE augmentation (default: 0.5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EM-SAEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EM-SAEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Kriging-Assisted Two-Archive Search (KTS)

This module implements KTS for computationally expensive constrained/unconstrained multi-objective optimization. It adaptively switches between two search modes:
  • Mode 0 (unconstrained/KTA2-style): two-archive CA/DA with convergence/diversity

  • Mode 1 (constrained/KCCMO-style): SPEA2-based fitness with K-means sampling

The switching is based on the correlation between convergence metric Q and constraint violation CV.

References

[1] Z. Song, H. Wang, and B. Xue. A Kriging-Assisted Two-Archive Evolutionary Algorithm for Expensive Multi-Objective Optimization. IEEE Transactions on Evolutionary Computation, 2024.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.KTS.KTS(problem, n_initial=None, max_nfes=None, n=100, tau=0.6, phi=-0.2, mu=20, phi1=0.1, wmax1=10, mu1=5, save_data=True, save_path='./Data', name='KTS', disable_tqdm=True)[source]

Bases: object

Kriging-Assisted Two-Archive Search for expensive constrained/unconstrained multi-objective optimization with adaptive mode switching.

__init__(problem, n_initial=None, max_nfes=None, n=100, tau=0.6, phi=-0.2, mu=20, phi1=0.1, wmax1=10, mu1=5, save_data=True, save_path='./Data', name='KTS', disable_tqdm=True)[source]

Initialize KTS algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population/archive size per task (default: 100)

  • tau (float, optional) – Correlation threshold for mode 0 (default: 0.6)

  • phi (float, optional) – Correlation threshold for mode 1 (default: -0.2)

  • mu (int, optional) – Number of elite solutions for correlation (default: 20)

  • phi1 (float, optional) – Uncertainty sampling fraction (default: 0.1)

  • wmax1 (int, optional) – Inner surrogate evolution generations (default: 10)

  • mu1 (int, optional) – Number of re-evaluated solutions per generation (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘KTS’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the KTS algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
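The mode switch correlates the convergence metric Q with the constraint violation CV over the mu elite solutions, against the tau and phi thresholds documented above. A sketch of that decision rule (the elite selection and tie-breaking here are assumptions):

```python
import numpy as np

def choose_mode(Q, CV, mu=20, tau=0.6, phi=-0.2, current=0):
    """Mode-switch sketch: Pearson correlation of Q and CV over mu elites."""
    order = np.argsort(Q)[:mu]               # elites by convergence metric
    r = np.corrcoef(Q[order], CV[order])[0, 1]
    if r >= tau:
        return 0                             # objectives and constraints agree
    if r <= phi:
        return 1                             # they conflict: constrained mode
    return current                           # otherwise keep the current mode
```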

Multigranularity Surrogate-Assisted Evolutionary Algorithm (MGSAEA)

This module implements MGSAEA for computationally expensive constrained multi-objective optimization. It uses a two-stage framework:
  • Stage 1 (convergence stage): builds surrogates for objectives only, ignoring constraints

  • Stage 2 (constraint stage): adaptively selects a constraint handling strategy based on the constraint satisfaction status (all violated, partially violated, all satisfied)

Stage transition is triggered when the ideal point change rate drops below a threshold, indicating convergence of the unconstrained search.

References

[1] Y. Zhang, H. Jiang, Y. Tian, H. Ma, and X. Zhang. Multigranularity surrogate modeling for evolutionary multiobjective optimization with expensive constraints. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(3): 2956-2968.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.MGSAEA.MGSAEA(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, gap=20, lam=0.001, save_data=True, save_path='./Data', name='MGSAEA', disable_tqdm=True)[source]

Bases: object

Multigranularity Surrogate-Assisted Evolutionary Algorithm for expensive constrained multi-objective optimization.

Two-stage approach:
  • Stage 1: objective-only surrogates until ideal points converge

  • Stage 2: constraint-aware surrogates with adaptive granularity

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, gap=20, lam=0.001, save_data=True, save_path='./Data', name='MGSAEA', disable_tqdm=True)[source]

Initialize MGSAEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Archive/population size per task (default: 100)

  • wmax (int, optional) – Number of inner GA generations on surrogates (default: 20)

  • mu (int, optional) – Number of real evaluated solutions per iteration (default: 5)

  • gap (int, optional) – Window for ideal point change rate computation (default: 20)

  • lam (float, optional) – Threshold for stage transition (default: 1e-3)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MGSAEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MGSAEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
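The stage transition fires when the ideal point's change rate over the last gap iterations drops below lam. A sketch of that trigger (the exact rate formula is an assumption):

```python
import numpy as np

def stage_transition(ideal_history, gap=20, lam=1e-3):
    """Stage-1 to stage-2 trigger sketch: ideal-point change rate below lam."""
    if len(ideal_history) <= gap:
        return False
    z_now = np.asarray(ideal_history[-1])
    z_old = np.asarray(ideal_history[-1 - gap])
    rate = np.linalg.norm(z_now - z_old) / gap
    return bool(rate < lam)
```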

Multi-Model-based Ranking Aggregation Evolutionary Algorithm (MMRAEA)

This module implements MMRAEA for computationally expensive multi/many-objective optimization. It uses three RBF surrogate models (objective approximation, dominance prediction, and fitness prediction) combined with a ranking aggregation infill strategy and dual evolutionary optimization (CSO + GA) for balanced convergence and diversity.

References

[1] J. Shen, X. Wang, R. He, Y. Tian, W. Wang, P. Wang, and Z. Wen. Optimization of high-dimensional expensive multi-objective problems using multi-mode radial basis functions. Complex & Intelligent Systems, 2025, 11(2): 147.

Notes

Author: Jiangtao Shen Date: 2026.02.18 Version: 1.0

class ddmtolab.Algorithms.STMO.MMRAEA.MMRAEA(problem, n_initial=None, max_nfes=None, n=100, wmax=20, save_data=True, save_path='./Data', name='MMRAEA', disable_tqdm=True)[source]

Bases: object

Multi-Model-based Ranking Aggregation Evolutionary Algorithm for expensive multi/many-objective optimization.

Uses three RBF surrogate models (objective, dominance, fitness) with ranking aggregation infill strategy and dual evolutionary optimization (CSO + GA).

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=100, wmax=20, save_data=True, save_path='./Data', name='MMRAEA', disable_tqdm=True)[source]

Initialize MMRAEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population size per task (default: 100)

  • wmax (int, optional) – Number of inner surrogate evolution generations (default: 20)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MMRAEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MMRAEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
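Ranking aggregation merges the orderings produced by the three surrogate models into one. A minimal sketch using an equal-weight rank sum (the weighting is an assumption):

```python
import numpy as np

def aggregate_ranks(*rankings):
    """Re-rank candidates by their summed ranks across models (0 = best)."""
    total = np.sum(rankings, axis=0)
    return np.argsort(total, kind="stable")
```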

MOEA/D with Efficient Global Optimization (MOEA/D-EGO)

This module implements MOEA/D-EGO for computationally expensive multi-objective optimization. It uses clustering-based Gaussian Process models, MOEA/D-DE to maximize Expected Tchebycheff Improvement (ETI) with Moment Matching Approximation, and K-means batch selection.

References

[1] Q. Zhang, W. Liu, E. Tsang, and B. Virginas. Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Transactions on Evolutionary Computation, 2010, 14(3): 456-474.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.MOEA_D_EGO.MOEA_D_EGO(problem, n_initial=None, max_nfes=None, batch_size=5, save_data=True, save_path='./Data', name='MOEA-D-EGO', disable_tqdm=True)[source]

Bases: object

MOEA/D with Efficient Global Optimization for expensive multi-objective optimization.

__init__(problem, n_initial=None, max_nfes=None, batch_size=5, save_data=True, save_path='./Data', name='MOEA-D-EGO', disable_tqdm=True)[source]

Initialize MOEA/D-EGO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • batch_size (int, optional) – Number of true evaluations per iteration (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MOEA-D-EGO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MOEA/D-EGO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
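The Tchebycheff scalarization underlying the ETI criterion decomposes the multi-objective problem into subproblems of the standard form:

```python
def tchebycheff(f, w, z):
    """Tchebycheff aggregation used by MOEA/D: g(x | w, z) = max_i w_i * |f_i - z_i|."""
    return max(wi * abs(fi - zi) for fi, wi, zi in zip(f, w, z))

print(tchebycheff((2.0, 4.0), (0.5, 0.5), (0.0, 0.0)))  # 2.0
```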

Multi-objective Efficient Global Optimization (MultiObjectiveEGO)

This module implements MultiObjectiveEGO for computationally expensive multi-objective optimization. It uses reference direction-based Augmented Achievement Scalarizing Function (AASF) to decompose the multi-objective problem into scalar subproblems, builds a single GP model per infill, and maximizes Standard Expected Improvement to select new evaluation points.

References

[1] R. Hussein and K. Deb. A generative Kriging surrogate model for constrained and unconstrained multi-objective optimization. Proceedings of the Genetic and Evolutionary Computation Conference, 2016, 573-580.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.MultiObjectiveEGO.MultiObjectiveEGO(problem, n_initial=None, max_nfes=None, alpha=0.7, num_k=5, H=21, rho=0.001, save_data=True, save_path='./Data', name='MultiObjectiveEGO', disable_tqdm=True)[source]

Bases: object

Multi-objective Efficient Global Optimization for expensive multi-objective optimization using reference direction-based scalarization and Expected Improvement.

__init__(problem, n_initial=None, max_nfes=None, alpha=0.7, num_k=5, H=21, rho=0.001, save_data=True, save_path='./Data', name='MultiObjectiveEGO', disable_tqdm=True)[source]

Initialize MultiObjectiveEGO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • alpha (float, optional) – Portion of samples for Kriging construction (default: 0.7)

  • num_k (int, optional) – Number of infill points per reference direction (default: 5)

  • H (int, optional) – Number of reference directions (default: 21)

  • rho (float, optional) – Parameter for AASF scalarization (default: 1e-3)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MultiObjectiveEGO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MultiObjectiveEGO algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
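The AASF scalarization with the rho parameter above takes the standard augmented form; a minimal sketch (the sign conventions match the usual minimization setting and are assumed to match the implementation):

```python
def aasf(f, w, z, rho=1e-3):
    """Augmented ASF sketch: max_i w_i (f_i - z_i) + rho * sum_i w_i (f_i - z_i)."""
    terms = [wi * (fi - zi) for fi, wi, zi in zip(f, w, z)]
    return max(terms) + rho * sum(terms)
```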

Pairwise Comparison Surrogate-Assisted Evolutionary Algorithm (PC-SAEA)

This module implements PC-SAEA for computationally expensive multi-objective optimization. It uses a Probabilistic Neural Network (PNN) based pairwise comparison surrogate that classifies whether one solution is better than another, combined with adaptive selection strategies based on surrogate reliability.

References

[1] H. Wang, Y. Jin, C. Sun, and J. Deng. A pairwise comparison based surrogate-assisted evolutionary algorithm for expensive multi-objective optimization. Swarm and Evolutionary Computation, 2023, 80: 101323.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.PCSAEA.PCSAEA(problem, n_initial=None, max_nfes=None, n=100, delta=0.8, gmax=3000, spread=0.1925, save_data=True, save_path='./Data', name='PC-SAEA', disable_tqdm=True)[source]

Bases: object

Pairwise Comparison Surrogate-Assisted Evolutionary Algorithm for expensive multi-objective optimization using PNN-based pairwise comparison surrogates.

__init__(problem, n_initial=None, max_nfes=None, n=100, delta=0.8, gmax=3000, spread=0.1925, save_data=True, save_path='./Data', name='PC-SAEA', disable_tqdm=True)[source]

Initialize PC-SAEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population/archive size per task (default: 100)

  • delta (float, optional) – Reliability threshold for surrogate (default: 0.8)

  • gmax (int, optional) – Number of surrogate evaluations per iteration (default: 3000)

  • spread (float, optional) – PNN RBF spread parameter (default: 0.1925)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘PC-SAEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the PC-SAEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
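Training data for a pairwise comparison surrogate is built from ordered pairs of evaluated solutions. A sketch of the label construction (skipping incomparable pairs is an assumption):

```python
def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pairwise_labels(objs):
    """Label (i, j) as 1 when solution i dominates j, 0 when j dominates i."""
    pairs = []
    for i, a in enumerate(objs):
        for j, b in enumerate(objs):
            if i == j:
                continue
            if dominates(a, b):
                pairs.append((i, j, 1))
            elif dominates(b, a):
                pairs.append((i, j, 0))
    return pairs
```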

Pareto-based Efficient Algorithm (PEA)

This module implements PEA for computationally expensive multi-objective optimization. It uses Constrained Probabilistic Pareto Dominance (CPPD) sorting that accounts for prediction uncertainty from Kriging models during evolutionary search, and selects promising candidates for expensive re-evaluation using diversity-based strategies.

References

[1] T. Sonoda and M. Nakata. Multiple Objective Optimization Based on Kriging Surrogate Model with Constrained Probabilistic Pareto Dominance. IEEE Congress on Evolutionary Computation (CEC), 2020.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.PEA.PEA(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, save_data=True, save_path='./Data', name='PEA', disable_tqdm=True)[source]

Bases: object

Pareto-based Efficient Algorithm for expensive multi-objective optimization using Constrained Probabilistic Pareto Dominance and Kriging surrogates.

__init__(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, save_data=True, save_path='./Data', name='PEA', disable_tqdm=True)[source]

Initialize PEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim-1)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population/archive size per task (default: 100)

  • wmax (int, optional) – Number of inner surrogate evolution generations (default: 20)

  • mu (int, optional) – Number of candidate solutions per iteration (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘PEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the PEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Performance Indicator-based Evolutionary Algorithm (PIEA)

This module implements PIEA for computationally expensive high-dimensional multi/many-objective optimization. It adaptively selects among three performance indicators (SDE, I_epsilon+, Minkowski distance) to train an SVR surrogate model, then uses DE-based model optimization and hierarchical evaluation to guide the search.

References

[1] Y. Li, W. Li, S. Li, and Y. Zhao. A performance indicator-based evolutionary algorithm for expensive high-dimensional multi-/many-objective optimization. Information Sciences, 2024: 121045.

Notes

Author: Jiangtao Shen (DDMTOLab implementation) Date: 2026.02.16 Version: 1.0

class ddmtolab.Algorithms.STMO.PIEA.PIEA(problem, n_initial=None, max_nfes=None, n=None, eta=5, r_max=20, tau=20, save_data=True, save_path='./Data', name='PIEA', disable_tqdm=True)[source]

Bases: object

Performance Indicator-based Evolutionary Algorithm for expensive multi/many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=None, eta=5, r_max=20, tau=20, save_data=True, save_path='./Data', name='PIEA', disable_tqdm=True)[source]

Initialize PIEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: same as n)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population size per task (default: 100)

  • eta (int, optional) – Number of pre-selected survivors (default: 5)

  • r_max (int, optional) – Maximum repeat time of offspring generation (default: 20)

  • tau (int, optional) – Window width for history list (default: 20)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘PIEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the PIEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Surrogate-Assisted Evolutionary Algorithm with Direction-Based Local Learning (SAEA-DBLL)

This module implements SAEA-DBLL for computationally expensive multi/many-objective optimization. It uses RBF surrogate models with a direction-based local learning strategy, where sub-reference vectors define neighborhoods for competitive swarm optimization, combined with adaptive vector selection and APD-based environmental selection.

References

[1] J. Shen, P. Wang, H. Dong, W. Wang, and J. Li. Surrogate-assisted evolutionary algorithm with decomposition-based local learning for high-dimensional multi-objective optimization. Expert Systems with Applications, 2024, 240: 122575.

Notes

Author: Jiangtao Shen Date: 2026.02.18 Version: 1.0

class ddmtolab.Algorithms.STMO.SAEA_DBLL.SAEA_DBLL(problem, n_initial=None, max_nfes=None, n=50, alpha=2.0, wmax=20, mu=5, T=3, K=2, save_data=True, save_path='./Data', name='SAEA-DBLL', disable_tqdm=True)[source]

Bases: object

Surrogate-Assisted Evolutionary Algorithm with Direction-Based Local Learning for expensive multi/many-objective optimization.

Uses RBF surrogates with neighborhood-aware competitive swarm optimization and adaptive sub-reference vector management.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, n=50, alpha=2.0, wmax=20, mu=5, T=3, K=2, save_data=True, save_path='./Data', name='SAEA-DBLL', disable_tqdm=True)[source]

Initialize SAEA-DBLL algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: dim+50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Number of reference vectors per task (default: 50)

  • alpha (float, optional) – Exponent for theta progression (default: 2.0)

  • wmax (int, optional) – Number of inner surrogate evolution generations (default: 20)

  • mu (int, optional) – Number of re-evaluated solutions per iteration (default: 5)

  • T (int, optional) – Neighborhood size for sub-vectors (default: 3)

  • K (int, optional) – Division factor for sub-vector count (default: 2)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SAEA-DBLL’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SAEA-DBLL algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Self-Organizing Surrogate-Assisted Non-Dominated Sorting Differential Evolution (SSDE)

This module implements SSDE for computationally expensive multi/many-objective optimization. It uses a Self-Organizing Map (SOM) as a surrogate model to predict offspring quality, combined with NSGA-II environmental selection. Only offspring that survive selection are evaluated with the expensive objective function.
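The survival test that gates expensive evaluations rests on Pareto dominance. A minimal sketch of a non-dominated filter (illustrative, not the library's NSGA-II environmental selection):

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return np.all(a <= b) and np.any(a < b)

def nondominated(F):
    """Return indices of the non-dominated rows of objective matrix F."""
    keep = []
    for i, fi in enumerate(F):
        if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i):
            keep.append(i)
    return keep

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(nondominated(F))  # (3, 3) is dominated by (2, 2)
```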

References

[1] A. F. R. Araujo, L. R. C. Farias, and A. R. C. Goncalves. Self-organizing surrogate-assisted non-dominated sorting differential evolution. Swarm and Evolutionary Computation, 2024, 91: 101703.

Notes

Author: Jiangtao Shen (DDMTOLab implementation) Date: 2026.02.16 Version: 1.0

class ddmtolab.Algorithms.STMO.SSDE.SSDE(problem, n=None, max_nfes=None, num_nodes=None, eta0=0.2, sigma0=None, save_data=True, save_path='./Data', name='SSDE', disable_tqdm=True)[source]

Bases: object

Self-Organizing Surrogate-Assisted Non-Dominated Sorting Differential Evolution for expensive multi/many-objective optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, num_nodes=None, eta0=0.2, sigma0=None, save_data=True, save_path='./Data', name='SSDE', disable_tqdm=True)[source]

Initialize SSDE algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100). Also used as the initial sample count (matching MATLAB: Problem.N).

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • num_nodes (int, optional) – Number of neurons in the SOM (default: same as n)

  • eta0 (float, optional) – Initial learning rate for SOM training (default: 0.2)

  • sigma0 (float, optional) – Initial neighborhood size for SOM (default: same as n)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SSDE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SSDE algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Two-phase Evolutionary Algorithm (TEA)

This module implements TEA for computationally expensive multi-objective optimization. It uses Probabilistic Dominant Product Dominance (PDPD) sorting that groups objectives by uncertainty level and applies product dominance for uncertain objectives, with Kriging surrogate models to guide the search.

References

[1] Z. Zhang, Y. Wang, J. Liu, G. Sun, and K. Tang. A two-phase Kriging-assisted evolutionary algorithm for expensive constrained multiobjective optimization problems. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, 54(8): 4579-4591.

Notes

Author: Jiangtao Shen Date: 2026.02.17 Version: 1.0

class ddmtolab.Algorithms.STMO.TEA.TEA(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, save_data=True, save_path='./Data', name='TEA', disable_tqdm=True)[source]

Bases: object

Two-phase Evolutionary Algorithm for expensive multi-objective optimization using PDPD sorting and Kriging surrogates.

__init__(problem, n_initial=None, max_nfes=None, n=100, wmax=20, mu=5, save_data=True, save_path='./Data', name='TEA', disable_tqdm=True)[source]

Initialize TEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • n (int or List[int], optional) – Population/archive size per task (default: 100)

  • wmax (int, optional) – Number of inner surrogate evolution generations (default: 20)

  • mu (int, optional) – Number of candidate solutions per iteration (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘TEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the TEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Multi-Task Single-Objective (MTSO)

Multifactorial Evolutionary Algorithm (MFEA)

This module implements MFEA for multi-task optimization with knowledge transfer across tasks.
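The central transfer mechanism in MFEA is assortative mating: parents with different skill factors crossover only with probability rmp, otherwise each is varied within its own task. A minimal sketch of that decision rule (illustrative; the function name is hypothetical, not part of the ddmtolab API):

```python
import numpy as np

rng = np.random.default_rng(4)
rmp = 0.3  # random mating probability

def mate_across_tasks(skill_a, skill_b, rmp, rng):
    """MFEA-style assortative mating: same-task parents always crossover;
    parents with different skill factors crossover with probability rmp."""
    return skill_a == skill_b or rng.random() < rmp

# The fraction of inter-task pairs that crossover converges to rmp.
n_pairs = 100_000
hits = sum(mate_across_tasks(0, 1, rmp, rng) for _ in range(n_pairs))
print(hits / n_pairs)  # close to 0.3
```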

References

[1] Abhishek Gupta, Yew-Soon Ong, and Liang Feng. “Multifactorial Evolution: Toward Evolutionary Multitasking.” IEEE Transactions on Evolutionary Computation, 20(3): 343-357, 2016.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.12 Version: 1.0

class ddmtolab.Algorithms.MTSO.MFEA.MFEA(problem, n=None, max_nfes=None, rmp=0.3, save_data=True, save_path='./Data', name='MFEA', disable_tqdm=True)[source]

Bases: object

Multifactorial Evolutionary Algorithm for multi-task optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, rmp=0.3, save_data=True, save_path='./Data', name='MFEA', disable_tqdm=True)[source]

Initialize Multifactorial Evolutionary Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • rmp (float, optional) – Random mating probability for inter-task crossover (default: 0.3)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MFEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Multifactorial Evolutionary Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Multifactorial Evolutionary Algorithm With Online Transfer Parameter Estimation (MFEA-II)

This module implements MFEA-II for multi-task optimization with knowledge transfer across tasks.

References

[1] Bali, Kavitesh Kumar, et al. “Multifactorial evolutionary algorithm with online transfer parameter estimation: MFEA-II.” IEEE Transactions on Evolutionary Computation 24.1 (2020): 69-83.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.16 Version: 1.0

class ddmtolab.Algorithms.MTSO.MFEA_II.MFEA_II(problem, n=None, max_nfes=None, save_data=True, save_path='./Data', name='MFEA-II', disable_tqdm=True)[source]

Bases: object

Multifactorial Evolutionary Algorithm With Online Transfer Parameter Estimation

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, save_data=True, save_path='./Data', name='MFEA-II', disable_tqdm=True)[source]

Initialize MFEA-II.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MFEA-II’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MFEA-II algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Generalized Multifactorial Evolutionary Algorithm (G-MFEA)

This module implements G-MFEA for multi-task optimization with adaptive knowledge transfer.

References

[1] Ding, Jinliang, et al. “Generalized Multitasking for Evolutionary Optimization of Expensive Problems.” IEEE Transactions on Evolutionary Computation 23.1 (2019): 44-58. https://doi.org/10.1109/TEVC.2017.2785351

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.12 Version: 1.0

class ddmtolab.Algorithms.MTSO.G_MFEA.G_MFEA(problem, n=None, max_nfes=None, rmp=0.3, muc=2.0, mum=5.0, phi=0.1, theta=0.02, top=0.4, save_data=True, save_path='./Data', name='G-MFEA', disable_tqdm=True)[source]

Bases: object

Generalized Multifactorial Evolutionary Algorithm for multi-task optimization.

This algorithm features:

  • Adaptive knowledge transfer via task-pair specific transfer vectors

  • Dimension shuffling for heterogeneous task alignment

  • Translation strategy based on population centroids
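The translation idea can be sketched by shifting two populations so their centroids coincide. This is an illustrative simplification: G-MFEA's actual transfer vectors are task-pair specific and updated over the run.

```python
import numpy as np

def translate(pop_a, pop_b):
    """Shift both populations so their centroids meet at the midpoint --
    an illustrative version of centroid-based alignment."""
    ca, cb = pop_a.mean(axis=0), pop_b.mean(axis=0)
    mid = (ca + cb) / 2
    return pop_a + (mid - ca), pop_b + (mid - cb)

rng = np.random.default_rng(1)
A = rng.random((30, 5))          # task-1 population
B = rng.random((30, 5)) + 2.0    # task-2 population in a shifted region
A2, B2 = translate(A, B)
print(np.allclose(A2.mean(axis=0), B2.mean(axis=0)))  # → True
```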

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, rmp=0.3, muc=2.0, mum=5.0, phi=0.1, theta=0.02, top=0.4, save_data=True, save_path='./Data', name='G-MFEA', disable_tqdm=True)[source]

Initialize G-MFEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • rmp (float, optional) – Random mating probability for inter-task crossover (default: 0.3)

  • muc (float, optional) – Distribution index for SBX crossover (default: 2.0)

  • mum (float, optional) – Distribution index for polynomial mutation (default: 5.0)

  • phi (float, optional) – Threshold ratio to activate translation (default: 0.1)

  • theta (float, optional) – Interval ratio for translation frequency (default: 0.02)

  • top (float, optional) – Ratio of top individuals used to estimate current optima (default: 0.4)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘G-MFEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the G-MFEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Multi-task Evolutionary Algorithm with Adaptive Knowledge Transfer via Anomaly Detection (MTEA-AD)

This module implements MTEA-AD for multi-task optimization with adaptive knowledge transfer using anomaly detection to identify beneficial solutions from other tasks.

References

[1] Wang, Chao, et al. “Solving multitask optimization problems with adaptive knowledge transfer via anomaly detection.” IEEE Transactions on Evolutionary Computation 26.2 (2022): 304-318.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.12 Version: 1.0

class ddmtolab.Algorithms.MTSO.MTEA_AD.MTEA_AD(problem, n=None, max_nfes=None, TRP=0.1, muc=2.0, mum=5.0, save_data=True, save_path='./Data', name='MTEA-AD', disable_tqdm=True)[source]

Bases: object

Multi-task Evolutionary Algorithm with Adaptive Knowledge Transfer via Anomaly Detection.

Uses a Gaussian-based anomaly detection model to adaptively identify and transfer beneficial solutions from other tasks during optimization.
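The detection step can be sketched with a diagonal Gaussian fitted to the target population, scoring candidate transfers by log-density. This is illustrative only; the paper's detector and the decision rule applied to the scores may differ:

```python
import numpy as np

def gaussian_scores(target_pop, candidates, eps=1e-6):
    """Log-density of candidates under a diagonal Gaussian fitted to the
    target population; low scores flag dissimilar (anomalous) solutions."""
    mu = target_pop.mean(axis=0)
    var = target_pop.var(axis=0) + eps
    z = (candidates - mu) ** 2 / var
    return -0.5 * (z + np.log(2 * np.pi * var)).sum(axis=1)

rng = np.random.default_rng(2)
pop = rng.normal(0.0, 1.0, size=(100, 4))
near = np.zeros((1, 4))        # candidate close to the target distribution
far = np.full((1, 4), 5.0)     # dissimilar candidate
print(gaussian_scores(pop, near) > gaussian_scores(pop, far))  # → [ True]
```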

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, TRP=0.1, muc=2.0, mum=5.0, save_data=True, save_path='./Data', name='MTEA-AD', disable_tqdm=True)[source]

Initialize MTEA-AD algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • TRP (float, optional) – Transfer probability - probability of knowledge transfer in each generation (default: 0.1)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 2.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 5.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTEA-AD’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTEA-AD algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Multi-task Evolutionary Algorithm with Self-adaptive Solvers (MTEA-SaO)

This module implements MTEA-SaO for multi-task single-objective optimization problems.

References

[1] Li, Yanchi, Wenyin Gong, and Shuijia Li. “Multitasking Optimization via an Adaptive Solver Multitasking Evolutionary Framework.” Information Sciences (2022). https://doi.org/10.1016/j.ins.2022.10.099

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.02 Version: 1.0

class ddmtolab.Algorithms.MTSO.MTEA_SaO.MTEA_SaO(problem, n=None, max_nfes=None, t_gap=10, t_num=10, sa_gap=70, memory=30, ga_muc=2.0, ga_mum=5.0, de_f=0.5, de_cr=0.9, save_data=True, save_path='./Data', name='MTEA-SaO', disable_tqdm=True)[source]

Bases: object

Multi-task Evolutionary Algorithm with Self-adaptive Solvers for single-objective optimization.

This algorithm features:

  • Two solver strategies: GA (SBX + PM) and DE (DE/rand/1/bin)

  • Self-adaptive solver selection based on success/failure history

  • Knowledge transfer between tasks via random solution injection

  • Adaptive population partitioning among solvers
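The self-adaptive allocation can be sketched as sharing the population among solvers in proportion to their recent success rates. This is a simplified stand-in for the paper's history-based rule; the helper name is hypothetical:

```python
import numpy as np

def solver_shares(success, failure, eps=1e-12):
    """Population shares per solver, proportional to success rates
    recorded over a sliding memory window."""
    s, f = np.asarray(success, float), np.asarray(failure, float)
    rates = s / (s + f + eps)
    if rates.sum() == 0:
        # No solver has succeeded yet: split the population evenly.
        return np.full(len(rates), 1.0 / len(rates))
    return rates / rates.sum()

# GA improved 30 of 40 recorded generations, DE only 10 of 40.
print(solver_shares([30, 10], [10, 30]))  # → [0.75 0.25]
```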

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, t_gap=10, t_num=10, sa_gap=70, memory=30, ga_muc=2.0, ga_mum=5.0, de_f=0.5, de_cr=0.9, save_data=True, save_path='./Data', name='MTEA-SaO', disable_tqdm=True)[source]

Initialize MTEA-SaO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • t_gap (int, optional) – Transfer gap - perform knowledge transfer every t_gap generations (default: 10)

  • t_num (int, optional) – Number of solutions to transfer (default: 10)

  • sa_gap (int, optional) – Self-adaptive gap - update solver allocation every sa_gap generations (default: 70)

  • memory (int, optional) – Memory length for success/failure history (default: 30)

  • ga_muc (float, optional) – Distribution index for GA crossover (SBX) (default: 2.0)

  • ga_mum (float, optional) – Distribution index for GA mutation (PM) (default: 5.0)

  • de_f (float, optional) – DE scaling factor (default: 0.5)

  • de_cr (float, optional) – DE crossover probability (default: 0.9)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTEA-SaO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTEA-SaO algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Evolutionary Multitasking via Explicit Autoencoding (EMEA)

This module implements EMEA for multi-task optimization with knowledge transfer via autoencoding.
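The explicit autoencoding step learns a linear map between the populations of two tasks and uses it to translate promising solutions. A minimal least-squares sketch on toy data (illustrative; EMEA's closed-form denoising-autoencoder solution may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(5)
P = rng.random((50, 10))              # source-task population (rows = solutions)
Q = P @ rng.random((10, 10)) * 0.5    # target-task population (toy linear data)

# Learn M with least squares so that P @ M ~= Q, then apply it to map
# elite source solutions into the target task's search space.
M, *_ = np.linalg.lstsq(P, Q, rcond=None)
transferred = P[:3] @ M               # three transferred solutions
print(np.allclose(P @ M, Q))          # exact here, since Q is linear in P
```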

References

[1] Feng, Liang, et al. “Evolutionary multitasking via explicit autoencoding.” IEEE Transactions on Cybernetics 49.9 (2019): 3457-3470.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.10.25 Version: 1.0

class ddmtolab.Algorithms.MTSO.EMEA.EMEA(problem, n=None, max_nfes=None, SNum=10, TGap=10, muc=2, mum=5, F=0.5, CR=0.6, save_data=True, save_path='./Data', name='EMEA', disable_tqdm=True)[source]

Bases: object

Evolutionary Multitasking via Explicit Autoencoding for multi-task optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, SNum=10, TGap=10, muc=2, mum=5, F=0.5, CR=0.6, save_data=True, save_path='./Data', name='EMEA', disable_tqdm=True)[source]

Initialize EMEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • SNum (int, optional) – Number of transferred solutions (default: 10)

  • TGap (int, optional) – Transfer interval in generations (default: 10)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 2.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 5.0)

  • F (float, optional) – Scaling factor for DE mutation (default: 0.5)

  • CR (float, optional) – Crossover rate for DE (default: 0.6)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EMEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EMEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Meta-Knowledge Transfer-based Differential Evolution (MKTDE)

This module implements MKTDE for multi-task optimization using centroid-based meta-knowledge transfer between tasks in a multi-population DE framework.

References

[1] Li, Jian-Yu, et al. “A Meta-Knowledge Transfer-Based Differential Evolution for Multitask Optimization.” IEEE Transactions on Evolutionary Computation, 26(4): 719-734, 2022.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 2.0

class ddmtolab.Algorithms.MTSO.MKTDE.MKTDE(problem, n=None, max_nfes=None, F=0.5, CR=0.6, save_data=True, save_path='./Data', name='MKTDE', disable_tqdm=True)[source]

Bases: object

Meta-Knowledge Transfer-based Differential Evolution.

Uses centroid alignment to transform source task solutions into the target task’s search distribution, creating an extended donor pool for DE/rand/1/bin. The base vector x1 is selected from the current task only, while difference vectors x2, x3 come from the combined pool. Additionally, an elite solution from the source task is transferred each generation.
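The mechanism described above can be sketched directly in NumPy. This is illustrative; variable names and random-number handling are not those of the library:

```python
import numpy as np

rng = np.random.default_rng(3)
F, CR = 0.5, 0.6
target = rng.random((10, 5))            # current task's population
source = rng.random((10, 5)) + 1.0      # other task's population

# Centroid alignment: shift source solutions into the target's region.
aligned = source - source.mean(axis=0) + target.mean(axis=0)
pool = np.vstack([target, aligned])     # extended donor pool

# DE/rand/1/bin for one individual: base vector x1 from the current task
# only; difference vectors x2, x3 from the combined pool.
x1 = target[rng.integers(len(target))]
x2, x3 = pool[rng.choice(len(pool), 2, replace=False)]
mutant = x1 + F * (x2 - x3)
cross = rng.random(5) < CR              # binomial crossover mask
trial = np.where(cross, mutant, target[0])
print(trial.shape)
```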

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, F=0.5, CR=0.6, save_data=True, save_path='./Data', name='MKTDE', disable_tqdm=True)[source]

Initialize MKTDE algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • F (float, optional) – DE mutation scale factor (default: 0.5)

  • CR (float, optional) – DE crossover rate (default: 0.6)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MKTDE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MKTDE algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Self-Regulated Evolutionary Multitask Optimization (SREMTO)

This module implements SREMTO for multi-task single-objective optimization problems.

References

[1] Zheng, Xiaolong, A. K. Qin, Maoguo Gong, and Deyun Zhou. “Self-Regulated Evolutionary Multitask Optimization.” IEEE Transactions on Evolutionary Computation 24.1 (2020): 16-28. https://doi.org/10.1109/TEVC.2019.2904696

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.28 Version: 1.0

class ddmtolab.Algorithms.MTSO.SREMTO.SREMTO(problem, n=None, max_nfes=None, th=0.3, p_alpha=0.7, p_beta=1.0, muc=1.0, mum=39.0, save_data=True, save_path='./Data', name='SREMTO', disable_tqdm=True)[source]

Bases: object

Self-Regulated Evolutionary Multitask Optimization.

This algorithm features:

  • Ability vector for self-regulated knowledge transfer

  • Two-line-segment ability calculation based on ranking

  • Combined SBX crossover with differential mutation

  • Multi-factorial evaluation based on ability probability
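The two-line-segment ability calculation can be sketched as a piecewise-linear function of rank. The knee placement and slopes below are illustrative assumptions, not the paper's exact formula:

```python
import numpy as np

def ability(ranks, n, th=0.3):
    """Illustrative two-line-segment mapping from rank (1 = best) to
    ability: falls quickly from 1 to `th` over the top th*n ranks,
    then slowly from `th` toward 0 over the remaining ranks."""
    knee = max(int(th * n), 1)
    r = np.asarray(ranks, dtype=float)
    fast = 1.0 - (1.0 - th) * (r - 1) / max(knee - 1, 1)
    slow = th * (n - r) / max(n - knee, 1)
    return np.where(r <= knee, fast, slow)

a = ability(np.arange(1, 11), n=10, th=0.3)
print(a[0], a[-1])  # best rank gets ability 1, worst gets 0
```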

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, th=0.3, p_alpha=0.7, p_beta=1.0, muc=1.0, mum=39.0, save_data=True, save_path='./Data', name='SREMTO', disable_tqdm=True)[source]

Initialize SREMTO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • th (float, optional) – Threshold for two-line segments point (default: 0.3)

  • p_alpha (float, optional) – Probability of crossover (default: 0.7)

  • p_beta (float, optional) – Probability of differential mutation (default: 1.0)

  • muc (float, optional) – Distribution index for SBX crossover (default: 1.0)

  • mum (float, optional) – Distribution index for polynomial mutation (default: 39.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SREMTO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SREMTO algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Radial Basis Functions-Assisted MTEA (RAMTEA)

This module implements RAMTEA for expensive multi-task optimization with surrogate-assisted adaptive knowledge transfer.

References

[1] Shen, Jiangtao, et al. “Surrogate-assisted adaptive knowledge transfer for expensive multitasking optimization.” 2024 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2024.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.20 Version: 1.0

class ddmtolab.Algorithms.MTSO.RAMTEA.RAMTEA(problem, n_initial=None, max_nfes=None, pop_size=50, w_max=50, save_data=True, save_path='./Data', name='RAMTEA', disable_tqdm=True)[source]

Bases: object

Radial Basis Functions-Assisted Multi-Task Evolutionary Algorithm for expensive optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, pop_size=50, w_max=50, save_data=True, save_path='./Data', name='RAMTEA', disable_tqdm=True)[source]

Initialize RAMTEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 100)

  • pop_size (int, optional) – Population size for GA optimization on surrogate (default: 50)

  • w_max (int, optional) – Maximum number of generations for GA optimization on surrogate (default: 50)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘RAMTEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the RAMTEA algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

A Surrogate-Assisted Evolutionary Framework for Expensive Multitask Optimization Problems (SELF)

This module implements SELF using multi-task Gaussian processes and Bayesian optimization for expensive multi-task optimization.

References

[1] Tan, Shenglian, et al. “A surrogate-assisted evolutionary framework for expensive multitask optimization problems.” IEEE Transactions on Evolutionary Computation (2024).

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.18 Version: 1.0

class ddmtolab.Algorithms.MTSO.SELF.SELF(problem, max_nfes=None, np=10, F=0.5, CR=0.9, ng=50, nl=50, save_data=True, save_path='./Data', name='SELF', disable_tqdm=True)[source]

Bases: object

Surrogate-Assisted Evolutionary Framework for expensive multi-task optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, max_nfes=None, np=10, F=0.5, CR=0.9, ng=50, nl=50, save_data=True, save_path='./Data', name='SELF', disable_tqdm=True)[source]

Initialize SELF algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • np (int, optional) – Population size (default: 10)

  • F (float, optional) – Mutation factor for DE (default: 0.5)

  • CR (float, optional) – Crossover rate for DE (default: 0.9)

  • ng (int, optional) – Number of trial vectors in global knowledge transfer phase (default: 50)

  • nl (int, optional) – Sample size for training GP model in local knowledge transfer phase (default: 50)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SELF’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SELF algorithm with three phases: global transfer, local optimization, and local transfer.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

EBS (Evolutionary Biocoenosis-based Symbiosis)

This module implements the EBS algorithm for evolutionary many-tasking optimization.

References

[1] Liaw, R. T., & Ting, C. K. (2019). Evolutionary many-tasking based on biocoenosis through symbiosis: A framework and benchmark problems. In 2019 IEEE Congress on Evolutionary Computation (CEC) (pp. 2266-2273). IEEE.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.01.09 Version: 1.0

class ddmtolab.Algorithms.MTSO.EBS.EBS(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, gen_init=10, save_data=True, save_path='./Data', name='EBS', disable_tqdm=True)[source]

Bases: object

Evolutionary Biocoenosis-based Symbiosis for many-task optimization.

EBS uses multiple CMA-ES instances with adaptive information exchange among tasks. Each task maintains two CMA-ES distributions:

  • One updated when knowledge transfer occurs

  • One updated when no knowledge transfer occurs

The information exchange probability is controlled adaptively based on the improvement ratio from self-generated offspring versus offspring from other tasks.
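When a population size is not supplied (see use_n below), the per-task size follows the standard CMA-ES default of 4+3*log(D). A sketch, assuming the conventional floored form:

```python
import math

def default_popsize(dim):
    """Standard CMA-ES default population size, 4 + floor(3 * ln(D))."""
    return 4 + int(3 * math.log(dim))

# Population sizes grow only logarithmically with dimensionality.
print([default_popsize(d) for d in (2, 10, 30, 50)])  # → [6, 10, 14, 15]
```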

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, sigma0=0.3, use_n=True, gen_init=10, save_data=True, save_path='./Data', name='EBS', disable_tqdm=True)[source]

Initialize EBS Algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: None, will use 4+3*log(D))

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • sigma0 (float, optional) – Initial step size for CMA-ES (default: 0.3)

  • use_n (bool, optional) – If True, use provided n; if False, use 4+3*log(D) (default: True)

  • gen_init (int, optional) – Number of initial generations for alternating CMA-ES before using gamma (default: 10). During this phase, the two CMA-ES instances alternate (one without transfer, one with transfer).

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EBS’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EBS Algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
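The adaptive information-exchange probability described above can be sketched in a few lines. The function name, the moving-average update rule, and the learning rate below are illustrative assumptions, not part of the EBS API:

```python
def update_exchange_prob(p, n_self_improved, n_transfer_improved, lr=0.1):
    """Nudge the exchange probability toward whichever offspring source
    (self-generated vs. transferred from other tasks) produced the larger
    share of improvements this generation. lr is an assumed smoothing rate."""
    total = n_self_improved + n_transfer_improved
    if total == 0:
        return p  # no improvement signal; keep the probability unchanged
    target = n_transfer_improved / total
    return (1 - lr) * p + lr * target

# If transfer produced 2 of 10 improvements, p drifts down from 0.5:
p = update_exchange_prob(0.5, n_self_improved=8, n_transfer_improved=2)
```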

Multi-Task Bayesian Optimization (MTBO)

This module implements MTBO for expensive multi-task optimization with knowledge transfer via multi-task Gaussian processes.

References

[1] Swersky, Kevin, Jasper Snoek, and Ryan P. Adams. “Multi-task bayesian optimization.” Advances in neural information processing systems 26 (2013).

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.12 Version: 1.0

class ddmtolab.Algorithms.MTSO.MTBO.MTBO(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='MTBO', disable_tqdm=True)[source]

Bases: object

Multi-Task Bayesian Optimization for expensive multi-task optimization problems.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, save_data=True, save_path='./Data', name='MTBO', disable_tqdm=True)[source]

Initialize Multi-Task Bayesian Optimization algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 100)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTBO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Multi-Task Bayesian Optimization algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
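Knowledge transfer in MTBO comes from a multi-task Gaussian process. A common construction, and a reasonable mental model here, multiplies an input kernel by a learned task covariance matrix; `mtgp_kernel` and its arguments are an illustrative sketch, not this module's API:

```python
import numpy as np

def mtgp_kernel(X1, t1, X2, t2, K_tasks, lengthscale=1.0):
    """Intrinsic-coregionalization-style multi-task kernel:
    k((x, t), (x', t')) = K_tasks[t, t'] * k_x(x, x'),
    with a squared-exponential input kernel k_x."""
    sq = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    k_x = np.exp(-0.5 * sq / lengthscale ** 2)
    return K_tasks[np.ix_(t1, t2)] * k_x  # elementwise task-similarity scaling

# With an identity task covariance, cross-task entries vanish:
X = np.array([[0.0], [1.0]])
t = np.array([0, 1])
K = mtgp_kernel(X, t, X, t, np.eye(2))
```

When `K_tasks` has large off-diagonal entries, observations from one task directly reduce predictive uncertainty on another, which is what enables cross-task transfer.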

Multi-task Max-value Bayesian Optimization (MUMBO)

This module implements MUMBO for expensive multi-task optimization. The algorithm uses an information-theoretic acquisition function based on mutual information between candidate observations and the global optimum value g*. A multi-task Gaussian process provides cross-task knowledge transfer, and the MUMBO acquisition function exploits the bivariate predictive distribution to compute rho (predictive correlation) between each task and the target task. The acquisition value is divided by the evaluation cost of each task, enabling cost-aware task selection.

References

[1] Moss, Henry B., David S. Leslie, and Paul Rayson. “Mumbo: Multi-task max-value Bayesian optimization.” Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2020.

[2] Wang, Zi, and Stefanie Jegelka. “Max-value entropy search for efficient Bayesian optimization.” ICML, 2017.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.18 Version: 2.0

class ddmtolab.Algorithms.MTSO.MUMBO.MUMBO(problem, n_initial=None, max_nfes=None, task_cost=None, n_gstar_samples=1, n_candidates=20, n_quad=10, save_data=True, save_path='./Data', name='MUMBO', disable_tqdm=True)[source]

Bases: object

Multi-task Max-value Bayesian Optimization (MUMBO).

algorithm_information

Dictionary containing algorithm capabilities and requirements.

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, task_cost=None, n_gstar_samples=1, n_candidates=20, n_quad=10, save_data=True, save_path='./Data', name='MUMBO', disable_tqdm=True)[source]

Initialize MUMBO.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance.

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50).

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 100).

  • task_cost (List[float] or None, optional) – Evaluation cost for each task. If None, all tasks have equal cost [1, 1, …, 1].

  • n_gstar_samples (int, optional) – Number of g* samples from Gumbel distribution (default: 1).

  • n_candidates (int, optional) – Number of random candidates per task for acquisition (default: 20).

  • n_quad (int, optional) – Number of quadrature points for Simpson integration (default: 10).

  • save_data (bool, optional) – Whether to save optimization data (default: True).

  • save_path (str, optional) – Path to save results (default: ‘./Data’).

  • name (str, optional) – Name for the experiment (default: ‘MUMBO’).

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True).

optimize()[source]

Execute MUMBO.

Returns:

Optimization results containing decision variables, objectives, and runtime.

Return type:

Results
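The cost-aware task selection described above, acquisition value divided by per-task evaluation cost, can be sketched as follows; the helper name and data layout are hypothetical illustrations, not the module's API:

```python
def select_task(acq_values, task_costs):
    """Pick the task whose best candidate maximizes acquisition value
    per unit evaluation cost.

    acq_values : dict mapping task index -> list of acquisition values
                 for that task's candidate points
    task_costs : dict mapping task index -> evaluation cost
    """
    ratios = {t: max(vals) / task_costs[t] for t, vals in acq_values.items()}
    return max(ratios, key=ratios.get)

# Task 0 is cheap and has the higher cost-adjusted acquisition value:
chosen = select_task({0: [0.2, 0.5], 1: [0.3, 0.4]}, {0: 1.0, 1: 2.0})
```

This is why cheap auxiliary tasks tend to be queried early: even a moderately informative observation wins once its acquisition value is divided by a small cost.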

Lower Confidence Bound-based Evolutionary Multi-Tasking (LCB-EMT)

This module implements LCB-EMT for multi-task optimization with knowledge transfer via Transfer Gaussian Process and similarity-based lower confidence bound.

References

[1] Wang, Zhenzhong, et al. “Evolutionary multitask optimization with lower confidence bound-based solution selection strategy.” IEEE Transactions on Evolutionary Computation (2024).

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.19 Version: 1.0

class ddmtolab.Algorithms.MTSO.LCB_EMT.LCB_EMT(problem, n=None, max_nfes=None, Nt=10, TGap=20, muc=2, mum=5, save_data=True, save_path='./Data', name='LCB-EMT', disable_tqdm=True)[source]

Bases: object

Lower Confidence Bound-based Evolutionary Multi-Tasking.

This algorithm uses Transfer Gaussian Process (TGP) to model task relationships and employs a Similarity-based Lower Confidence Bound (SLCB) strategy to select promising solutions for knowledge transfer between tasks.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, Nt=10, TGap=20, muc=2, mum=5, save_data=True, save_path='./Data', name='LCB-EMT', disable_tqdm=True)[source]

Initialize LCB-EMT algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • Nt (int, optional) – Number of solutions to transfer per knowledge transfer operation (default: 10)

  • TGap (int, optional) – Transfer interval in generations (default: 20)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 2.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 5.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘LCB-EMT’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the LCB-EMT algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
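A minimal sketch of a similarity-weighted lower confidence bound in the spirit of the SLCB strategy. The exact weighting in the paper may differ; the function below is illustrative only, with the similarity factor simply scaling the uncertainty term:

```python
def slcb(mu, sigma, similarity, kappa=2.0):
    """Similarity-weighted lower confidence bound for minimization.

    mu, sigma  : TGP predictive mean and standard deviation for a
                 candidate transferred solution
    similarity : assumed task-similarity weight in [0, 1]
    Lower values are more promising (optimistic under the model)."""
    return mu - kappa * similarity * sigma

# Candidates are then ranked by slcb(); the minimizer is transferred.
scores = [slcb(1.0, 0.5, 1.0), slcb(0.9, 0.1, 1.0), slcb(1.2, 0.8, 0.5)]
best = min(range(len(scores)), key=lambda i: scores[i])
```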

Bayesian Optimization with Lower Confidence Bound and Competitive Knowledge Transfer (BO-LCB-CKT)

This module implements BO-LCB-CKT for expensive sequential transfer optimization problems.

Key Features:

  • Task 0: Target task (to be optimized)

  • Tasks 1:k: Source tasks (provide knowledge base)

  • Only task 0 is actively optimized; source tasks are pre-optimized once

References

[1] Xue, Xiaoming, et al. “Surrogate-assisted search with competitive knowledge transfer for expensive optimization.” IEEE Transactions on Evolutionary Computation (2024).

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.09 Version: 1.0

class ddmtolab.Algorithms.MTSO.BO_LCB_CKT.BO_LCB_CKT(problem, n_initial=None, max_nfes=None, gen_gap=10, ada_flag=False, save_data=True, save_path='./Data', name='BO-LCB-CKT', disable_tqdm=False)[source]

Bases: object

Bayesian Optimization with Lower Confidence Bound and Competitive Knowledge Transfer.

This algorithm optimizes task 0 (target) by leveraging knowledge from tasks 1:k (sources). Source tasks are pre-optimized once to build a knowledge base, then only the target task is optimized with competitive knowledge transfer.

Now supports tasks with different dimensions using space_transfer.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, gen_gap=10, ada_flag=False, save_data=True, save_path='./Data', name='BO-LCB-CKT', disable_tqdm=False)[source]

Initialize BO-LCB-CKT algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance. Task 0 is the target task, tasks 1:k are source tasks.

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: [500, 100, 100, …]). The first value is the budget for the target task (task 0); the remaining values are the budgets for the source tasks.

  • gen_gap (int, optional) – Knowledge transfer trigger frequency (default: 10)

  • ada_flag (bool, optional) – Whether to enable task adaptation (default: False)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘BO-LCB-CKT’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: False)

optimize()[source]

Execute the BO-LCB-CKT algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
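The "competitive" part of the knowledge transfer can be pictured as a head-to-head comparison under the surrogate: a solution adapted from a source task is adopted only if its predicted objective beats the internally generated candidate's. The helper below is an illustrative sketch under that reading, not the module's API:

```python
def competitive_select(internal_cand, transfer_cands, predict):
    """Competition between internally generated and transferred candidates.

    internal_cand  : candidate produced by the target task's own search
    transfer_cands : candidates adapted from the source knowledge base
    predict        : surrogate predictor, lower is better (minimization)
    """
    best_transfer = min(transfer_cands, key=predict)
    if predict(best_transfer) < predict(internal_cand):
        return best_transfer  # transfer wins the competition
    return internal_cand      # internal search wins; no transfer this round

# With a toy 1-D "surrogate" (identity), the transferred value 2 beats 3:
winner = competitive_select(3, [5, 2, 4], predict=lambda s: s)
```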

BO-LCB-BCKT: Bayesian Optimization with Lower Confidence Bound and Bayesian Competitive Knowledge Transfer

This module implements BO-LCB-BCKT for expensive multi-task optimization problems.

References

[1] Lu, Yi, et al. “Multi-Task Surrogate-Assisted Search with Bayesian Competitive Knowledge Transfer for Expensive Optimization.” arXiv preprint arXiv:2510.23407 (2025).

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.09 Version: 2.0

class ddmtolab.Algorithms.MTSO.BO_LCB_BCKT.BO_LCB_BCKT(problem, n_initial=None, max_nfes=None, gen_gap=10, sigma_I_sq=0.0025000000000000005, save_data=True, save_path='./Data', name='BO-LCB-BCKT', disable_tqdm=True, padding='zero')[source]

Bases: object

BO-LCB-BCKT algorithm for expensive multi-task optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, gen_gap=10, sigma_I_sq=0.0025000000000000005, save_data=True, save_path='./Data', name='BO-LCB-BCKT', disable_tqdm=True, padding='zero')[source]
optimize()[source]

Execute the BO-LCB-BCKT algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Evolutionary Expected Improvement based Bayesian Optimization for MTOP (EEI-BO+)

This module implements Bayesian Optimization for expensive single-objective optimization problems using an evolutionary approach to optimize the Expected Improvement acquisition function.

References

[1] Liu, Jiao, et al. “Solving highly expensive optimization problems via evolutionary expected improvement.” IEEE Transactions on Systems, Man, and Cybernetics: Systems 53.8 (2023): 4843-4855.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.17 Version: 1.0

class ddmtolab.Algorithms.MTSO.EEI_BO_plus.EEI_BO_plus(problem, n_initial=None, max_nfes=None, switch_interval=6, n1=50, max_nfes1=500, n2=30, max_nfes2=6000, save_data=True, save_path='./Data', name='EEI-BO+', disable_tqdm=True)[source]

Bases: object

Evolutionary Expected Improvement based Bayesian Optimization for MTOP.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, switch_interval=6, n1=50, max_nfes1=500, n2=30, max_nfes2=6000, save_data=True, save_path='./Data', name='EEI-BO+', disable_tqdm=True)[source]

Initialize EEI-BO+ algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 100)

  • n1 (int, optional) – Population size of CMA-ES (default: 50)

  • max_nfes1 (int, optional) – Maximum number of function evaluations of CMA-ES (default: 500)

  • n2 (int, optional) – Population size of DE (default: 30)

  • max_nfes2 (int, optional) – Maximum number of function evaluations of DE (default: 6000)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EEI-BO+’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the Evolutionary Expected Improvement based Bayesian Optimization algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
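The acquisition function that EEI-BO+ optimizes evolutionarily is the standard Expected Improvement, which has a closed form under a Gaussian predictive distribution (shown here for minimization; only the surrounding helper is ours):

```python
from statistics import NormalDist

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization:
    EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma,
    where Phi/phi are the standard normal CDF/PDF."""
    if sigma <= 0.0:
        # Degenerate prediction: improvement is deterministic
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    nd = NormalDist()
    return (f_best - mu) * nd.cdf(z) + sigma * nd.pdf(z)

# At the incumbent (mu == f_best), EI reduces to sigma * phi(0):
ei = expected_improvement(mu=0.0, sigma=1.0, f_best=0.0)
```

Because EI is highly multi-modal in high dimensions, the module maximizes it with evolutionary search (CMA-ES and DE, per the n1/n2 parameters above) rather than gradient methods.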

Block-Level Knowledge Transfer DE (BLKT-DE)

This module implements BLKT-DE for multi-task optimization using block-level decision variable decomposition with k-means clustering for cross-task knowledge transfer.

References

[1] Jiang, Yi, et al. “Block-Level Knowledge Transfer for Evolutionary Multitask Optimization.” IEEE Transactions on Cybernetics, 1-14, 2023.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.BLKT_DE.BLKT_DE(problem, n=None, max_nfes=None, F=0.5, CR=0.7, save_data=True, save_path='./Data', name='BLKT-DE', disable_tqdm=True)[source]

Bases: object

Block-Level Knowledge Transfer DE.

Decomposes decision variables into fixed-size blocks, clusters them across all tasks and individuals using k-means, then performs DE/rand/1 within each cluster for cross-task knowledge transfer. Combined with standard DE/rand/1/bin offspring per task.

Adaptive block size (divD) and cluster count (divK) based on per-task improvement: full reset if all tasks stagnate, slight perturbation if some tasks stagnate, no change if all improve.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, F=0.5, CR=0.7, save_data=True, save_path='./Data', name='BLKT-DE', disable_tqdm=True)[source]

Initialize BLKT-DE algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • F (float, optional) – DE mutation scale factor (default: 0.5)

  • CR (float, optional) – DE crossover rate (default: 0.7)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘BLKT-DE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the BLKT-DE algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
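The block-level decomposition step, cutting every individual's decision vector into fixed-size blocks before clustering them across tasks, can be sketched as follows. Dropping a ragged final block is an implementation assumption made for this illustration:

```python
import numpy as np

def decompose_blocks(pop, block_size):
    """Split each individual's decision vector into contiguous blocks.

    pop        : array of shape (n_individuals, dim)
    block_size : divD, the current block length
    Returns an array of shape (n_individuals * n_blocks, block_size),
    ready to be clustered with k-means across all tasks."""
    n, d = pop.shape
    usable = (d // block_size) * block_size  # drop a ragged tail (assumption)
    n_blocks = usable // block_size
    return pop[:, :usable].reshape(n * n_blocks, block_size)

# Two 10-D individuals with block size 5 yield four 5-D blocks:
pop = np.arange(20, dtype=float).reshape(2, 10)
blocks = decompose_blocks(pop, block_size=5)
```

The resulting block rows from all tasks are pooled and clustered; DE/rand/1 within a cluster then mixes blocks originating from different tasks, which is the cross-task transfer channel.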

Distribution Direction-Assisted Two-Stage Knowledge Transfer (DTSKT)

This module implements DTSKT for many-task single-objective optimization with distribution-based two-stage knowledge transfer.

References

[1] Zhang, Tingyu, et al. “Distribution Direction-Assisted Two-Stage Knowledge Transfer for Many-Task Optimization.” IEEE Transactions on Systems, Man, and Cybernetics: Systems (2025): 1-15.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.DTSKT.DTSKT(problem, n=None, max_nfes=None, A=0.35, beta=0.6, rmp=0.5, topn=2, save_data=True, save_path='./Data', name='DTSKT', disable_tqdm=True)[source]

Bases: object

Distribution Direction-Assisted Two-Stage Knowledge Transfer for Many-Task Optimization.

Uses Gaussian EDA with two-stage knowledge transfer:

  • Exploring stage: shifts the sampling mean along the best source task’s search path

  • Exploiting stage: uses a combined source-target distribution for refined search

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, A=0.35, beta=0.6, rmp=0.5, topn=2, save_data=True, save_path='./Data', name='DTSKT', disable_tqdm=True)[source]

Initialize DTSKT algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance (all tasks must have equal dimensions)

  • n (int, optional) – Population size per task (default: 200)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • A (float, optional) – Elite ratio for weighted mean computation (default: 0.35)

  • beta (float, optional) – Stage transition ratio - fraction of budget for exploring stage (default: 0.6)

  • rmp (float, optional) – Probability of knowledge transfer in offspring generation (default: 0.5)

  • topn (int, optional) – Number of top source tasks for multi-source transfer (default: 2)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘DTSKT’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the DTSKT algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
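The exploring-stage mean shift can be pictured as moving the target task's EDA sampling mean along the source task's recent search direction. The function below is an illustrative sketch of that idea; the name and the step parameter are assumptions:

```python
def exploring_mean(target_mean, source_mean_now, source_mean_prev, step=1.0):
    """Shift the Gaussian EDA sampling mean for the target task along the
    distribution direction of the best source task (its mean displacement
    over recent generations)."""
    direction = [a - b for a, b in zip(source_mean_now, source_mean_prev)]
    return [m + step * d for m, d in zip(target_mean, direction)]

# The source moved from (1, 1) to (2, 3); the target mean follows that path:
shifted = exploring_mean([0.0, 0.0], [2.0, 3.0], [1.0, 1.0])
```

After the beta fraction of the budget is spent, the algorithm switches to the exploiting stage and samples from a combined source-target distribution instead.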

Evolutionary Multi-Task Optimization with Adaptive Intensity (EMTO-AI)

This module implements EMTO-AI for multi-task optimization with adaptive knowledge transfer intensity based on cross-task competitiveness evaluation.

References

[1] Zhou, Xinyu, et al. “Evolutionary Multi-Task Optimization With Adaptive Intensity of Knowledge Transfer.” IEEE Transactions on Emerging Topics in Computational Intelligence, 1-13, 2024.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.EMTO_AI.EMTO_AI(problem, n=None, max_nfes=None, F=0.5, CR=0.6, rate=0.05, gap_gen=35, save_data=True, save_path='./Data', name='EMTO-AI', disable_tqdm=True)[source]

Bases: object

Evolutionary Multi-Task Optimization with Adaptive Intensity of Knowledge Transfer.

Uses a DE-based multi-factorial framework with:

  • Per-task elite archives for inter-task knowledge transfer

  • DE mutation with an archive base vector for transfer, DE/rand/1 for intra-task search

  • Binomial crossover

  • Adaptive transfer intensity (rmp), updated periodically by cross-evaluating each task’s subpopulation on the other tasks and measuring competitiveness

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, F=0.5, CR=0.6, rate=0.05, gap_gen=35, save_data=True, save_path='./Data', name='EMTO-AI', disable_tqdm=True)[source]

Initialize EMTO-AI algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • F (float, optional) – DE mutation scale factor (default: 0.5)

  • CR (float, optional) – DE crossover rate (default: 0.6)

  • rate (float, optional) – Archive size as fraction of per-task population (default: 0.05)

  • gap_gen (int, optional) – Generation interval for transfer intensity update (default: 35)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EMTO-AI’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EMTO-AI algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
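The adaptive transfer-intensity update can be sketched as a bounded step driven by cross-task competitiveness. The threshold, step size, and bounds below are illustrative assumptions, not the values used by the module:

```python
def update_rmp(rmp, n_competitive, n_total, delta=0.1, lo=0.0, hi=1.0):
    """Adjust the knowledge-transfer intensity from competitiveness:
    raise rmp when cross-evaluated individuals from other tasks are
    competitive on this task, lower it otherwise (all constants assumed)."""
    if n_total == 0:
        return rmp
    if n_competitive / n_total > 0.5:
        rmp += delta  # other tasks help here: transfer more
    else:
        rmp -= delta  # transfer is not paying off: dial it back
    return min(hi, max(lo, rmp))

# 6 of 10 cross-evaluated individuals were competitive, so rmp rises:
rmp = update_rmp(0.5, n_competitive=6, n_total=10)
```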

Multifactorial Evolutionary Algorithm with Adaptive Knowledge Transfer (MFEA-AKT)

This module implements MFEA-AKT for multi-task optimization with adaptive crossover operator selection for inter-task knowledge transfer.

References

[1] Zhou, Lei, et al. “Toward Adaptive Knowledge Transfer in Multifactorial Evolutionary Computation.” IEEE Transactions on Cybernetics, 51(5): 2563-2576, 2021.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.MFEA_AKT.MFEA_AKT(problem, n=None, max_nfes=None, rmp=0.3, gap=20, muc=2, mum=5, save_data=True, save_path='./Data', name='MFEA-AKT', disable_tqdm=True)[source]

Bases: object

Multifactorial Evolutionary Algorithm with Adaptive Knowledge Transfer.

Extends MFEA with 6 crossover operators for inter-task transfer and an adaptive mechanism to select the best operator based on improvement tracking.

The 6 crossover operators are:

  • 0: Two-point crossover

  • 1: Uniform crossover

  • 2: Arithmetical crossover (r=0.25)

  • 3: Geometric crossover (r=0.2)

  • 4: BLX-alpha crossover (a=0.3)

  • 5: SBX crossover

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, rmp=0.3, gap=20, muc=2, mum=5, save_data=True, save_path='./Data', name='MFEA-AKT', disable_tqdm=True)[source]

Initialize MFEA-AKT algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • rmp (float, optional) – Random mating probability for inter-task crossover (default: 0.3)

  • gap (int, optional) – History window size for operator selection fallback (default: 20)

  • muc (float, optional) – Distribution index for SBX crossover (default: 2)

  • mum (float, optional) – Distribution index for polynomial mutation (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MFEA-AKT’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MFEA-AKT algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
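The adaptive operator choice can be sketched as improvement bookkeeping over the six crossover operators: prefer the operator with the most recent successes, and fall back to a random pick when nothing has improved. The exact bookkeeping (and the `gap` fallback) in the paper is richer; this is an illustrative reduction:

```python
import random

def pick_operator(success_counts):
    """Select an inter-task crossover operator index.

    success_counts : per-operator counts of offspring that improved
                     their task's best within the recent history window
    Falls back to a uniformly random operator when no operator has
    produced an improvement (the exploration fallback)."""
    if sum(success_counts) == 0:
        return random.randrange(len(success_counts))
    return max(range(len(success_counts)), key=lambda i: success_counts[i])

# Operator 1 (uniform crossover) has the best recent record:
op = pick_operator([0, 3, 1, 0, 0, 0])
```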

Multifactorial Evolutionary Algorithm Based on Diffusion Gradient Descent (MFEA-DGD)

This module implements MFEA-DGD for multi-task optimization using gradient estimation via finite differences with random orthogonal directions.

References

[1] Liu, Zhaobo, et al. “Multifactorial Evolutionary Algorithm Based on Diffusion Gradient Descent.” IEEE Transactions on Cybernetics, 1-13, 2023.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.MFEA_DGD.MFEA_DGD(problem, n=None, max_nfes=None, rmp=0.7, gamma=0.1, save_data=True, save_path='./Data', name='MFEA-DGD', disable_tqdm=True)[source]

Bases: object

Multifactorial Evolutionary Algorithm Based on Diffusion Gradient Descent.

Uses gradient estimation via random finite differences to guide the crossover and mutation operators:

  • Random perturbation direction drawn from a Gaussian distribution

  • Finite-difference gradient estimation per parent pair

  • Gradient-guided blend crossover with opposition-based learning (OBL)

  • Gradient descent mutation for non-transfer offspring

  • Adaptive sigma randomly selected each generation

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, rmp=0.7, gamma=0.1, save_data=True, save_path='./Data', name='MFEA-DGD', disable_tqdm=True)[source]

Initialize MFEA-DGD algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • rmp (float, optional) – Random mating probability for inter-task crossover (default: 0.7)

  • gamma (float, optional) – Smoothing factor for gradient norm tracking (default: 0.1)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MFEA-DGD’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MFEA-DGD algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
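The finite-difference gradient estimate at the heart of the method can be sketched as a one-sample directional estimate along a random Gaussian direction; the function below is an illustrative reduction of that idea, not the module's implementation:

```python
import random

def fd_gradient(f, x, eps=1e-4):
    """One-sample finite-difference gradient estimate:
    draw a random Gaussian direction u, then
    g ≈ ((f(x + eps * u) - f(x)) / eps) * u.
    In expectation over u this recovers the true gradient (E[u u^T] = I),
    at the cost of only one extra function evaluation."""
    u = [random.gauss(0.0, 1.0) for _ in x]
    x_pert = [xi + eps * ui for xi, ui in zip(x, u)]
    scale = (f(x_pert) - f(x)) / eps
    return [scale * ui for ui in u]

# For a constant function the directional difference, and hence the
# estimated gradient, is exactly zero:
g = fd_gradient(lambda v: 1.0, [0.0, 0.0, 0.0])
```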

Multifactorial Evolutionary Algorithm with Single-Step Generative Model (MFEA-SSG)

This module implements MFEA-SSG for expensive multi-task optimization using a diffusion-based generative model with knowledge distillation for single-step inference.

References

[1] R. Wang, X. Feng, H. Yu, Y. Tan, and E. M. K. Lai, “Meta-Learning Inspired Single-Step Generative Model for Expensive Multitask Optimization Problems,” IEEE Transactions on Evolutionary Computation, 2025.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.01 Version: 1.0

class ddmtolab.Algorithms.MTSO.MFEA_SSG.MFEA_SSG(problem, n=None, max_nfes=None, rmp=0.3, muc=2, mum=5, max_gen=None, refine_freq=3, n_pairs_per_gen=None, n_diffusion_steps=100, train_epochs=5, distill_epochs=5, batch_size=512, lr=0.0005, base_ch=64, save_data=True, save_path='./Data', name='MFEA-SSG', disable_tqdm=True)[source]

Bases: object

Multifactorial Evolutionary Algorithm with Single-Step Generative Model.

Follows the MFEA architecture with a diffusion-based generative model replacing crossover in early generations. Knowledge distillation compresses the teacher model into a lightweight student for single-step inference.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, rmp=0.3, muc=2, mum=5, max_gen=None, refine_freq=3, n_pairs_per_gen=None, n_diffusion_steps=100, train_epochs=5, distill_epochs=5, batch_size=512, lr=0.0005, base_ch=64, save_data=True, save_path='./Data', name='MFEA-SSG', disable_tqdm=True)[source]

Initialize MFEA-SSG algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • rmp (float, optional) – Random mating probability (default: 0.3)

  • muc (float, optional) – Distribution index for SBX crossover (default: 2)

  • mum (float, optional) – Distribution index for polynomial mutation (default: 5)

  • max_gen (int, optional) – Maximum generation for generative phase (default: auto)

  • refine_freq (int, optional) – Refinement frequency tau for generative model (default: 3)

  • n_diffusion_steps (int, optional) – Number of diffusion timesteps N (default: 100)

  • train_epochs (int, optional) – Training epochs for teacher model (default: 5)

  • distill_epochs (int, optional) – Knowledge distillation epochs (default: 5)

  • batch_size (int, optional) – Mini-batch size for training (default: 512)

  • lr (float, optional) – Learning rate for Adam optimizer (default: 5e-4)

  • base_ch (int, optional) – Base channel count for U-Net models (default: 64)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MFEA-SSG’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MFEA-SSG algorithm (Algorithm 1 in paper).

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Multifactorial Evolutionary Algorithm with Variational Crossover (MFEA-VC)

This module implements MFEA-VC for multi-task optimization using a contrastive Variational Auto-Encoder (VAE) to guide knowledge transfer in early generations.

References

[1] Wang, Ruilin, et al. “Contrastive Variational Auto-Encoder Driven Convergence Guidance in Evolutionary Multitasking.” Applied Soft Computing, 163: 111883, 2024.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.MFEA_VC.MFEA_VC(problem, n=None, max_nfes=None, rmp=0.3, muc=2, mum=5, vae_gens=25, lam=0.8, save_data=True, save_path='./Data', name='MFEA-VC', disable_tqdm=True)[source]

Bases: object

Multifactorial Evolutionary Algorithm with Variational Crossover.

Uses a VAE (with random weights, no training) to generate cross-task individuals for the first vae_gens generations. The VAE encodes both tasks’ population data into a shared latent space and decodes to produce mixed-task offspring used as SBX crossover partners.

After vae_gens generations, reverts to standard MFEA behavior with SBX crossover and polynomial mutation.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, rmp=0.3, muc=2, mum=5, vae_gens=25, lam=0.8, save_data=True, save_path='./Data', name='MFEA-VC', disable_tqdm=True)[source]

Initialize MFEA-VC algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • rmp (float, optional) – Random mating probability (default: 0.3)

  • muc (float, optional) – Distribution index for SBX crossover (default: 2)

  • mum (float, optional) – Distribution index for polynomial mutation (default: 5)

  • vae_gens (int, optional) – Number of generations to use VAE-guided crossover (default: 25)

  • lam (float, optional) – Lambda scaling factor for VAE latent space (default: 0.8)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MFEA-VC’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MFEA-VC algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Multitask Differential Evolution with Adaptive Dual Knowledge Transfer (MTDE-ADKT)

This module implements MTDE-ADKT for multi-task optimization using SHADE-based adaptive parameters, distribution-aligned knowledge transfer, and adaptive transfer probability control.

References

[1] Zhang, Tingyu, et al. “Multitask Differential Evolution with Adaptive Dual Knowledge Transfer.” Applied Soft Computing, 165: 112040, 2024.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.MTDE_ADKT.MTDE_ADKT(problem, n=None, max_nfes=None, P=0.1, H=100, Gap=50, Alpha=0.25, RMP0=0.15, Beta=0.9, TGap=1, save_data=True, save_path='./Data', name='MTDE-ADKT', disable_tqdm=True)[source]

Bases: object

Multitask DE with Adaptive Dual Knowledge Transfer.

Combines SHADE-based adaptive F/CR with two knowledge transfer modes and a standard no-transfer fallback:

  • Type 1: Distribution-aligned transfer via covariance whitening/coloring

  • Type 2: Direct transfer from the source task

  • Type 3: Standard DE/current-to-pbest/1 (no transfer)

Transfer probabilities (RMP1, RMP2) are adaptively adjusted based on the relative success rates of each transfer type.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, P=0.1, H=100, Gap=50, Alpha=0.25, RMP0=0.15, Beta=0.9, TGap=1, save_data=True, save_path='./Data', name='MTDE-ADKT', disable_tqdm=True)[source]

Initialize MTDE-ADKT algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • P (float, optional) – Top P fraction for pbest selection (default: 0.1)

  • H (int, optional) – SHADE success history memory size (default: 100)

  • Gap (int, optional) – RMP adaptation period in generations (default: 50)

  • Alpha (float, optional) – Population reduction timing as fraction of budget (default: 0.25)

  • RMP0 (float, optional) – Initial random mating probability (default: 0.15)

  • Beta (float, optional) – EMA smoothing factor for centroid tracking (default: 0.9)

  • TGap (int, optional) – Transfer frequency: transfer every TGap generations (default: 1)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTDE-ADKT’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTDE-ADKT algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
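The Type 1 transfer described above aligns the source population's distribution with the target's by whitening with the source covariance and coloring with the target covariance. A minimal NumPy sketch of that idea (illustrative only; function name and details are not DDMTOLab's internal routine):

```python
import numpy as np

def align_distribution(src, tgt, eps=1e-6):
    """Map source points into the target distribution: whiten with the
    source covariance, then color with the target covariance."""
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    cov_s = np.cov(src, rowvar=False) + eps * np.eye(src.shape[1])
    cov_t = np.cov(tgt, rowvar=False) + eps * np.eye(tgt.shape[1])
    Ls = np.linalg.cholesky(cov_s)  # coloring factor of the source
    Lt = np.linalg.cholesky(cov_t)  # coloring factor of the target
    white = np.linalg.solve(Ls, (src - mu_s).T).T  # whitened source points
    return white @ Lt.T + mu_t

rng = np.random.default_rng(0)
src = rng.normal(2.0, 0.5, size=(200, 3))
tgt = rng.normal(-1.0, 2.0, size=(200, 3))
moved = align_distribution(src, tgt)  # src, reshaped to tgt's distribution
```

After the mapping, the transferred points share the target population's mean and (approximately) its covariance, so they can serve as plausible trial vectors for the target task.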

Multi-Task Evolutionary Algorithm with Hierarchical Knowledge Transfer Strategy (MTEA-HKTS)

This module implements MTEA-HKTS for multi-task optimization using KLD-based variable ordering, adaptive knowledge transfer with hierarchical strategy selection, and alternating GA/DE operators.

References

[1] Zhao, Ben, et al. “A Multi-Task Evolutionary Algorithm for Solving the Problem of Transfer Targets.” Information Sciences, 681: 121214, 2024.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.MTEA_HKTS.MTEA_HKTS(problem, n=None, max_nfes=None, pTransfer=0.5, mu=2, mum=5, F=0.5, CR=0.5, minx=0.1, Lb=0.1, Ub=0.7, save_data=True, save_path='./Data', name='MTEA-HKTS', disable_tqdm=True)[source]

Bases: object

Multi-Task EA with Hierarchical Knowledge Transfer Strategy.

Uses KLD-based decision variable alignment across tasks, adaptive transfer probability control via a task selection table, and alternating GA (SBX+PM) / DE (rand/1/bin) operators.

Three operation modes per generation:

  • sign=0 (10%): Separate transferred population evaluated independently

  • sign=1 (9%): Transferred individuals replace worst, standard GA/DE

  • sign=2 (81%): Transferred individuals in temp pop, cross-population GA/DE

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, pTransfer=0.5, mu=2, mum=5, F=0.5, CR=0.5, minx=0.1, Lb=0.1, Ub=0.7, save_data=True, save_path='./Data', name='MTEA-HKTS', disable_tqdm=True)[source]

Initialize MTEA-HKTS algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • pTransfer (float, optional) – Initial transfer portion (default: 0.5)

  • mu (float, optional) – SBX crossover distribution index (default: 2)

  • mum (float, optional) – Polynomial mutation distribution index (default: 5)

  • F (float, optional) – DE mutation factor (default: 0.5)

  • CR (float, optional) – DE crossover rate (default: 0.5)

  • minx (float, optional) – Minimum scale boundary (default: 0.1)

  • Lb (float, optional) – Lower bound for transfer probability (default: 0.1)

  • Ub (float, optional) – Upper bound for transfer probability (default: 0.7)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTEA-HKTS’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTEA-HKTS algorithm.

Returns:

Optimization results

Return type:

Results
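The KLD-based variable alignment described above ranks decision variables by how similar their distributions are across tasks. A small sketch using the closed-form KL divergence between univariate Gaussians (the per-dimension statistics and ordering convention here are illustrative, not the paper's exact procedure):

```python
import math

def gauss_kld(mu1, s1, mu2, s2):
    """KL(N(mu1, s1^2) || N(mu2, s2^2)), closed form."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

def rank_variables(stats_a, stats_b):
    """Order variable indices by ascending KLD between two tasks'
    per-dimension (mean, std) statistics: most similar variables first."""
    klds = [gauss_kld(ma, sa, mb, sb)
            for (ma, sa), (mb, sb) in zip(stats_a, stats_b)]
    return sorted(range(len(klds)), key=lambda i: klds[i])

# Hypothetical per-dimension statistics for two tasks
task_a = [(0.0, 1.0), (5.0, 1.0), (0.0, 1.0)]
task_b = [(0.1, 1.0), (0.0, 1.0), (0.0, 1.0)]
order = rank_variables(task_a, task_b)  # dim 2 identical, dim 1 far apart
```

Variables whose marginal distributions nearly coincide across tasks are the safest candidates for direct transfer; high-KLD variables are where transfer is most likely to mislead.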

Multi-Task Evolutionary Algorithm with Progressive Auto-Encoding (MTEA-PAE)

This module implements MTEA-PAE for multi-task optimization using kernelized autoencoding (NFC) for cross-task knowledge transfer with adaptive operator and transfer type selection.

References

[1] Gu, Qiong, et al. “Progressive Auto-Encoding for Domain Adaptation in Evolutionary Multi-Task Optimization.” Applied Soft Computing, 113916, 2025.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.MTEA_PAE.MTEA_PAE(problem, n=None, max_nfes=None, Seg=10, TNum=20, TGap=5, F=0.5, CR=0.9, MuC=2, MuM=5, save_data=True, save_path='./Data', name='MTEA-PAE', disable_tqdm=True)[source]

Bases: object

Multi-Task Evolutionary Algorithm with Progressive Auto-Encoding.

Uses kernelized autoencoding (NFC) for cross-task knowledge transfer with two transfer strategies: segment transfer (using current distribution) and stochastic replacement transfer (using archive). Adaptive selection between DE/rand/1/bin and GA (SBX+PM) for offspring generation.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, Seg=10, TNum=20, TGap=5, F=0.5, CR=0.9, MuC=2, MuM=5, save_data=True, save_path='./Data', name='MTEA-PAE', disable_tqdm=True)[source]

Initialize MTEA-PAE algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • Seg (int, optional) – Number of segments for distribution snapshots (default: 10)

  • TNum (int, optional) – Number of solutions to transfer (default: 20)

  • TGap (int, optional) – Generation gap between transfers (default: 5)

  • F (float, optional) – DE mutation scale factor (default: 0.5)

  • CR (float, optional) – DE crossover rate (default: 0.9)

  • MuC (float, optional) – SBX distribution index (default: 2)

  • MuM (float, optional) – PM distribution index (default: 5)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTEA-PAE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTEA-PAE algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Multi-Task Evolution Strategy with Knowledge-Guided External Sampling (MTES-KG)

This module implements the MTES-KG algorithm for multi-task single-objective optimization. The algorithm extends CMA-ES with two types of knowledge-guided external sampling across tasks: DoS (Domain of Solution knowledge) and SaS (Shape of function knowledge), along with adaptive negative transfer mitigation.

References

[1] Y. Li, W. Gong, and S. Li. “Multitask Evolution Strategy With Knowledge-Guided External Sampling.” IEEE Transactions on Evolutionary Computation, 28(6): 1733-1745, 2024.

Notes

The code is developed in accordance with the MATLAB-based MTO-platform framework.

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.21 Version: 1.0

class ddmtolab.Algorithms.MTSO.MTES_KG.MTES_KG(problem, n=None, max_nfes=None, tau0=2, alpha=0.5, adj_gap=50, sigma0=0.3, save_data=True, save_path='./Data', name='MTES-KG', disable_tqdm=True)[source]

Bases: object

Multi-Task Evolution Strategy with Knowledge-Guided External Sampling.

Each task maintains an independent CMA-ES instance. Knowledge transfer between tasks is achieved through external samples generated via two strategies:

  • DoS (Domain of Solution knowledge): Samples from an auxiliary task’s distribution, projected to within the current task’s neighborhood

  • SaS (Shape of function knowledge): Transfers the search direction from an auxiliary task’s successful solutions using CMA-ES coordinate system transformation

An adaptive mechanism adjusts the number of external samples (tau) to mitigate negative transfer.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, tau0=2, alpha=0.5, adj_gap=50, sigma0=0.3, save_data=True, save_path='./Data', name='MTES-KG', disable_tqdm=True)[source]

Initialize MTES-KG algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size (lambda) per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • tau0 (int, optional) – Initial external sample number per task (default: 2)

  • alpha (float, optional) – Probability of using DoS vs SaS for external sampling (default: 0.5)

  • adj_gap (int, optional) – Generation gap for adjusting tau (default: 50)

  • sigma0 (float, optional) – Initial step size for CMA-ES (default: 0.3)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTES-KG’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTES-KG algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
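The DoS strategy described above draws samples from an auxiliary task's distribution and projects them into the current task's neighborhood. A one-dimensional sketch of that projection step (the clamp-to-k-sigma rule here is a simplification for illustration, not the paper's exact projection):

```python
import random

def dos_sample(aux_mean, aux_sigma, cur_mean, cur_sigma, k=3.0):
    """Sample from the auxiliary task's Gaussian, then clamp the
    sample into the current task's k-sigma neighborhood."""
    x = random.gauss(aux_mean, aux_sigma)
    lo = cur_mean - k * cur_sigma
    hi = cur_mean + k * cur_sigma
    return min(max(x, lo), hi)
```

When the two tasks' distributions overlap, most samples pass through unchanged and carry the auxiliary task's domain knowledge; when they do not, the clamp keeps the external sample from dragging the current CMA-ES instance far off its search region.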

SaEF-AKT: Surrogate-Assisted Evolutionary Framework with Adaptive Knowledge Transfer

This module implements SaEF-AKT for expensive multi-task single-objective optimization.

References

[1] Z. Huang, J. Zhong, and W. N. N. Yu, “Surrogate-Assisted Evolutionary Framework with Adaptive Knowledge Transfer for Multi-Task Optimization,” IEEE Trans. Emerg. Topics Comput., vol. 9, no. 4, pp. 1930-1944, 2021.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.SaEF_AKT.SaEF_AKT(problem, n_initial=None, max_nfes=None, NC=60, NR=60, Phe_a=0.1, Phe_max=1.0, Phe_min=0.01, P_max=0.9, g_values=None, de_pop=15, de_merit_evals=2000, save_data=True, save_path='./Data', name='SaEF-AKT', disable_tqdm=True)[source]

Bases: object

Surrogate-Assisted Evolutionary Framework with Adaptive Knowledge Transfer.

This algorithm features:

  • Local Gaussian Process modeling (NC nearest + NR recent points)

  • Multiple merit functions with different exploration-exploitation balance (g = 0, 1, 2, 4)

  • DE-based optimization of merit functions on the GP surrogate

  • KL divergence-based task similarity measurement

  • Pheromone-based adaptive auxiliary task selection for knowledge transfer

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n_initial=None, max_nfes=None, NC=60, NR=60, Phe_a=0.1, Phe_max=1.0, Phe_min=0.01, P_max=0.9, g_values=None, de_pop=15, de_merit_evals=2000, save_data=True, save_path='./Data', name='SaEF-AKT', disable_tqdm=True)[source]

Initialize SaEF-AKT algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 50)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 100)

  • NC (int, optional) – Number of nearest neighbor points for local GP (default: 60)

  • NR (int, optional) – Number of most recent evaluation points for local GP (default: 60)

  • Phe_a (float, optional) – Pheromone evaporation rate (default: 0.1)

  • Phe_max (float, optional) – Maximum pheromone concentration (default: 1.0)

  • Phe_min (float, optional) – Minimum pheromone concentration (default: 0.01)

  • P_max (float, optional) – Probability of selecting the task with maximum transfer probability (default: 0.9)

  • g_values (list of float, optional) – Exploration weights for merit functions (default: [0, 1, 2, 4])

  • de_pop (int, optional) – DE population size for merit function optimization (default: 15)

  • de_merit_evals (int, optional) – Max DE evaluations for merit function optimization (default: 2000)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SaEF-AKT’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SaEF-AKT algorithm.

The main loop follows Algorithm 3 in the paper:

  • Phase 1: GPOP search + real evaluation for each task (1 FE per task)

  • Phase 2: Similarity measurement via KL divergence on the updated databases

  • Phase 3: Knowledge transfer + pheromone update for each task (1 FE per task)

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
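The pheromone bookkeeping behind Phe_a, Phe_max, and Phe_min can be sketched as an evaporation-plus-reinforcement update clipped to [Phe_min, Phe_max]. This is a sketch of the general ant-colony-style mechanism, not the paper's exact formula:

```python
def update_pheromone(phe, success, a=0.1, phe_min=0.01, phe_max=1.0):
    """Evaporate, then reinforce if the transfer from this auxiliary
    task succeeded; clip to the allowed pheromone range."""
    phe = (1.0 - a) * phe  # evaporation at rate a
    if success:
        phe += a           # reinforcement on successful transfer
    return min(max(phe, phe_min), phe_max)

phe = 0.5
for outcome in [True, True, False, True]:
    phe = update_pheromone(phe, outcome)
```

Tasks that repeatedly yield useful transfers keep a high pheromone level and are selected more often (via P_max), while unhelpful tasks decay toward Phe_min without ever being excluded entirely.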

Scenario-based Self-Learning Transfer Differential Evolution (SSLT-DE)

This module implements SSLT-DE for multi-task optimization using a DQN-based reinforcement learning framework to adaptively select among four knowledge transfer scenarios.

References

[1] Z. Yuan, G. Dai, L. Peng, M. Wang, Z. Song, and X. Chen, “Scenario-based self-learning transfer framework for multi-task optimization problems,” Knowledge-Based Systems, vol. 325, p. 113824, 2025.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.SSLT_DE.SSLT_DE(problem, n=None, max_nfes=None, threshold=150, gap=50, gamma=0.9, epsilon=0.8, F=0.5, CR=0.9, save_data=True, save_path='./Data', name='SSLT-DE', disable_tqdm=True)[source]

Bases: object

Scenario-based Self-Learning Transfer Differential Evolution.

Uses a DQN-based reinforcement learning framework to adaptively select among four knowledge transfer scenarios:

  1. No transfer (standard DE/rand/1/bin)

  2. Shape transfer (shift smoothed source toward target center)

  3. Bi-directional transfer (DE on merged populations)

  4. Domain transfer (direction-guided from best source-target difference)

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, threshold=150, gap=50, gamma=0.9, epsilon=0.8, F=0.5, CR=0.9, save_data=True, save_path='./Data', name='SSLT-DE', disable_tqdm=True)[source]

Initialize SSLT-DE algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • threshold (int, optional) – Number of generations before building DQN (default: 150)

  • gap (int, optional) – DQN update interval in generations (default: 50)

  • gamma (float, optional) – Discount factor for Q-learning (default: 0.9)

  • epsilon (float, optional) – Epsilon-greedy exploration rate (default: 0.8)

  • F (float, optional) – DE mutation scale factor (default: 0.5)

  • CR (float, optional) – DE crossover rate (default: 0.9)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘SSLT-DE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the SSLT-DE algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
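The scenario choice above reduces, at each decision point, to an epsilon-greedy pick over Q-values. A minimal sketch (whether epsilon gates exploitation or exploration is an implementation detail; this sketch uses it as the exploitation probability):

```python
import random

SCENARIOS = ["no_transfer", "shape", "bidirectional", "domain"]

def select_scenario(q_values, epsilon=0.8):
    """Epsilon-greedy over the four scenarios: exploit the best-Q
    scenario with probability epsilon, otherwise explore at random."""
    if random.random() < epsilon:
        return max(range(len(q_values)), key=lambda i: q_values[i])
    return random.randrange(len(q_values))
```

In the full algorithm the Q-values come from the DQN (rebuilt after `threshold` generations and refreshed every `gap` generations), with rewards derived from each scenario's recent success in producing improved offspring.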

Transfer Task-averaged Natural Gradient - Separable NES (TNG-SNES)

This module implements TNG-SNES for many-task single-objective optimization using separable Natural Evolution Strategy with task-averaged gradient transfer.

References

[1] Li, Yanchi, et al. “Transfer Task-averaged Natural Gradient for Efficient Many-task Optimization.” IEEE Transactions on Evolutionary Computation, 29(5): 1952-1965, 2025.

Notes

Author: Jiangtao Shen (DDMTOLab adaptation) Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTSO.TNG_SNES.TNG_SNES(problem, n=None, max_nfes=None, sigma0=0.3, rho0=0.1, alpha0=0.7, adj_gap=100, save_data=True, save_path='./Data', name='TNG-SNES', disable_tqdm=True)[source]

Bases: object

Transfer Task-averaged Natural Gradient for Many-Task Optimization (Separable NES).

Uses separable NES with task-averaged natural gradient for knowledge transfer:

  • Each task maintains a Gaussian distribution N(x, diag(S^2))

  • Natural gradients are computed from fitness-ranked utility weights

  • Task-averaged gradient is transferred with adaptive utilization rate (rho)

  • Adaptive transfer control adjusts rho and alpha via virtual parameter comparison

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, sigma0=0.3, rho0=0.1, alpha0=0.7, adj_gap=100, save_data=True, save_path='./Data', name='TNG-SNES', disable_tqdm=True)[source]

Initialize TNG-SNES algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • sigma0 (float, optional) – Initial standard deviation for all dimensions (default: 0.3)

  • rho0 (float, optional) – Initial utilization factor for gradient transfer (default: 0.1)

  • alpha0 (float, optional) – Initial transfer rate / probability (default: 0.7)

  • adj_gap (int, optional) – Generation interval for adaptive transfer control (default: 100)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘TNG-SNES’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the TNG-SNES algorithm.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results
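The per-task separable-NES update underlying the description above can be sketched in one dimension: sample, rank by fitness, form log-rank utility weights, and apply natural-gradient updates to the mean and step size. A minimal sketch (standard SNES, without the task-averaged transfer term, which would be blended into g_mean/g_sigma with rate rho):

```python
import math
import random

def snes_step(mean, sigma, fitness, n=10, eta_m=1.0, eta_s=0.3):
    """One separable-NES generation on a single variable (minimization)."""
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xs = [mean + sigma * z for z in zs]
    order = sorted(range(n), key=lambda i: fitness(xs[i]))  # best first
    # log-rank utility weights, shifted to sum to zero
    raw = [max(0.0, math.log(n / 2 + 1) - math.log(k + 1)) for k in range(n)]
    s = sum(raw)
    u = [r / s - 1.0 / n for r in raw]
    g_mean = sum(u[k] * zs[order[k]] for k in range(n))
    g_sigma = sum(u[k] * (zs[order[k]] ** 2 - 1.0) for k in range(n))
    return mean + eta_m * sigma * g_mean, sigma * math.exp(0.5 * eta_s * g_sigma)

random.seed(0)
mean, sigma = 5.0, 1.0
for _ in range(200):
    mean, sigma = snes_step(mean, sigma, lambda x: x * x)  # sphere function
```

Because the utilities sum to zero, only the ranking of samples matters; the diagonal covariance makes each dimension's update independent, which is what keeps SNES cheap enough for many-task settings.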

Multitask Multiobjective (MTMO)

Multiobjective Multifactorial Evolutionary Algorithm (MOMFEA)

This module implements MOMFEA for multi-objective multi-task optimization with knowledge transfer.

References

[1] Abhishek Gupta, Yew-Soon Ong, and Liang Feng. “Multifactorial Evolution: Toward Evolutionary Multitasking.” IEEE Transactions on Evolutionary Computation, 20(3): 343-357, 2016.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.27 Version: 1.0

class ddmtolab.Algorithms.MTMO.MO_MFEA.MO_MFEA(problem, n=None, max_nfes=None, rmp=0.3, save_data=True, save_path='./Data', name='MO-MFEA', disable_tqdm=True)[source]

Bases: object

Multiobjective Multifactorial Evolutionary Algorithm for multi-objective multi-task optimization.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, rmp=0.3, save_data=True, save_path='./Data', name='MO-MFEA', disable_tqdm=True)[source]

Initialize MOMFEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • rmp (float, optional) – Random mating probability for inter-task crossover (default: 0.3)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MO-MFEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MOMFEA algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
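The rmp parameter above implements MFEA-style assortative mating: parents with the same skill factor always recombine, while cross-task pairs recombine only with probability rmp (and otherwise fall back to mutation). A minimal sketch of that gating (simplified; the real operator also assigns skill factors to offspring):

```python
import random

def choose_mating(skill_a, skill_b, rmp=0.3):
    """Return 'crossover' if this parent pair may recombine, else 'mutate'.
    Same-task parents always recombine; cross-task pairs only with
    probability rmp."""
    if skill_a == skill_b or random.random() < rmp:
        return "crossover"
    return "mutate"
```

Raising rmp increases inter-task genetic exchange (more transfer, more risk of negative transfer); rmp=0 reduces the algorithm to independent per-task evolution.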

Multiobjective Multifactorial Evolutionary Algorithm With Online Transfer Parameter Estimation (MO-MFEA-II)

This module implements MO-MFEA-II for multi-objective multi-task optimization with online transfer parameter estimation.

References

[1] Bali, Kavitesh Kumar, et al. “Cognizant multitasking in multiobjective multifactorial evolution: MO-MFEA-II.” IEEE Transactions on Cybernetics 51.4 (2020): 1784-1796.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.16 Version: 1.0

class ddmtolab.Algorithms.MTMO.MO_MFEA_II.MO_MFEA_II(problem, n=None, max_nfes=None, save_data=True, save_path='./Data', name='MO-MFEA-II', disable_tqdm=True)[source]

Bases: object

Multiobjective Multifactorial Evolutionary Algorithm With Online Transfer Parameter Estimation.

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, save_data=True, save_path='./Data', name='MO-MFEA-II', disable_tqdm=True)[source]

Initialize MO-MFEA-II.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MO-MFEA-II’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MO-MFEA-II algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Multiobjective Evolutionary Multitasking via Explicit Autoencoding (MO-EMEA)

This module implements the MO_EMEA algorithm for multi-task multi-objective optimization problems with knowledge transfer.

References

[1] L. Feng, L. Zhou, J. Zhong, A. Gupta, Y. -S. Ong, K. -C. Tan, and A. K. Qin. “Evolutionary Multitasking via Explicit Autoencoding.” IEEE Transactions on Cybernetics, 49(9): 3457-3470, 2019.

Notes

The code is developed in accordance with the MATLAB-based MTO-platform framework.

Author: Jing Wang Date: 2026.01.09 Version: 1.0

class ddmtolab.Algorithms.MTMO.MO_EMEA.MO_EMEA(problem, n=None, max_nfes=None, operator='SP/NS', s_num=None, t_gap=None, mu_c=None, mu_m=None, save_data=True, save_path='./Data', name='MO-EMEA', disable_tqdm=True)[source]

Bases: object

Multi-task Multi-objective Evolutionary Multitasking via Explicit Autoencoding (MO_EMEA).

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, operator='SP/NS', s_num=None, t_gap=None, mu_c=None, mu_m=None, save_data=True, save_path='./Data', name='MO-EMEA', disable_tqdm=True)[source]

Initialize MO-EMEA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • operator (str, optional) – Selection operator(s) split with ‘/’, e.g., ‘SP/NS’ (default: ‘SP/NS’) - ‘SP’: SPEA2 selection - ‘NS’: NSGA-II selection

  • s_num (int, optional) – Number of solutions for knowledge transfer (default: 10)

  • t_gap (int, optional) – Generation gap for knowledge transfer (default: 10)

  • mu_c (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20)

  • mu_m (float, optional) – Distribution index for polynomial mutation (PM) (default: 15)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MO-EMEA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute MO_EMEA algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints and runtime

Return type:

Results
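The explicit autoencoding transfer above learns a mapping M that reconstructs the target population from the source population; promising source solutions are then pushed through M into the target's search space. A least-squares NumPy sketch of that single-layer idea (an illustrative simplification of the paper's denoising autoencoder; names here are hypothetical):

```python
import numpy as np

def learn_mapping(src_pop, tgt_pop):
    """Least-squares M with src_pop @ M ~= tgt_pop (populations stored
    row-wise with equal population sizes; dimensions may differ)."""
    M, *_ = np.linalg.lstsq(src_pop, tgt_pop, rcond=None)
    return M

rng = np.random.default_rng(0)
src = rng.random((100, 5))
true_map = rng.random((5, 5))
tgt = src @ true_map                  # synthetic, exactly linear relation
M = learn_mapping(src, tgt)
transferred = src[:10] @ M            # map 10 source solutions across tasks
```

The mapping is re-learned every t_gap generations from the current populations, so it tracks how the two tasks' distributions drift as the search progresses.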

Multi-objective Multi-task Evolutionary Algorithm with Self-adaptive Solvers (MO-MTEA-SaO)

This module implements MO-MTEA-SaO for multi-task multi-objective optimization problems.

References

[1] Li, Yanchi, Wenyin Gong, and Shuijia Li. “Multitasking Optimization via an Adaptive Solver Multitasking Evolutionary Framework.” Information Sciences (2022).

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.18 Version: 1.0

class ddmtolab.Algorithms.MTMO.MO_MTEA_SaO.MO_MTEA_SaO(problem, n=None, max_nfes=None, t_gap=10, t_num=10, sa_gap=70, memory=30, ga_muc=20.0, ga_mum=15.0, de_f=0.5, de_cr=0.9, save_data=True, save_path='./Data', name='MO-MTEA-SaO', disable_tqdm=True)[source]

Bases: object

Multi-objective Multi-task Evolutionary Algorithm with Self-adaptive Solvers.

This algorithm features:

  • Two solver strategies: NSGA-II + GA and SPEA2 + DE

  • Self-adaptive solver selection based on success/failure history

  • Knowledge transfer between tasks

  • Adaptive population partitioning among solvers

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, t_gap=10, t_num=10, sa_gap=70, memory=30, ga_muc=20.0, ga_mum=15.0, de_f=0.5, de_cr=0.9, save_data=True, save_path='./Data', name='MO-MTEA-SaO', disable_tqdm=True)[source]

Initialize MO-MTEA-SaO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • t_gap (int, optional) – Transfer gap - perform knowledge transfer every t_gap generations (default: 10)

  • t_num (int, optional) – Number of solutions to transfer (default: 10)

  • sa_gap (int, optional) – Self-adaptive gap - update solver allocation every sa_gap generations (default: 70)

  • memory (int, optional) – Memory length for success/failure history (default: 30)

  • ga_muc (float, optional) – Distribution index for GA crossover (SBX) (default: 20.0)

  • ga_mum (float, optional) – Distribution index for GA mutation (PM) (default: 15.0)

  • de_f (float, optional) – DE scaling factor (default: 0.5)

  • de_cr (float, optional) – DE crossover probability (default: 0.9)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MO-MTEA-SaO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MO-MTEA-SaO algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
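The adaptive partitioning above can be sketched as allocating the population between the two solvers in proportion to their recent success rates over the `memory` window, with a floor so neither solver is starved. A minimal sketch (the floor and rounding rules are illustrative assumptions, not the paper's exact scheme):

```python
def allocate(success, failure, n_pop, floor=0.1):
    """Split n_pop between two solvers in proportion to their success
    rates, keeping at least a `floor` share for each solver."""
    rates = []
    for s, f in zip(success, failure):
        total = s + f
        rates.append(s / total if total else 0.5)  # no history: neutral
    share = rates[0] / sum(rates) if sum(rates) else 0.5
    share = min(max(share, floor), 1.0 - floor)
    n0 = round(n_pop * share)
    return n0, n_pop - n0

n_ga, n_de = allocate(success=[24, 6], failure=[6, 24], n_pop=100)
```

Keeping a nonzero floor matters: a solver that performs poorly early may still be the right one later, so it must retain enough individuals to generate fresh success/failure evidence.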

Multi-task Multi-objective Evolutionary Algorithm Based on Decomposition with Dynamic Neighborhood (MTEA-D-DN)

This module implements MTEA-D-DN for multi-task multi-objective optimization problems.

References

[1] Wang, Xianpeng, Zhiming Dong, Lixin Tang, and Qingfu Zhang. “Multiobjective Multitask Optimization - Neighborhood as a Bridge for Knowledge Transfer.” IEEE Transactions on Evolutionary Computation 27.1 (2023): 155-169.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.18 Version: 1.0

class ddmtolab.Algorithms.MTMO.MTEA_D_DN.MTEA_D_DN(problem, n=None, max_nfes=None, beta=0.2, F=0.5, CR=0.9, mum=20.0, save_data=True, save_path='./Data', name='MTEA-D-DN', disable_tqdm=True)[source]

Bases: object

Multi-task Multi-objective Evolutionary Algorithm Based on Decomposition with Dynamic Neighborhood.

This algorithm uses neighborhood structure as a bridge for knowledge transfer between tasks. It maintains two types of neighborhoods:

  • B: Primary neighborhood within the same task (based on weight vector distance)

  • B2: Secondary neighborhood from other tasks (for knowledge transfer)

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, beta=0.2, F=0.5, CR=0.9, mum=20.0, save_data=True, save_path='./Data', name='MTEA-D-DN', disable_tqdm=True)[source]

Initialize MTEA-D-DN algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • beta (float, optional) – Probability of choosing parents locally (from neighborhood) (default: 0.2)

  • F (float, optional) – Scaling factor for DE mutation (default: 0.5)

  • CR (float, optional) – Crossover rate for DE (default: 0.9)

  • mum (float, optional) – Distribution index for polynomial mutation (default: 20.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTEA-D-DN’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTEA-D-DN algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results
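The primary neighborhood B above is built MOEA/D-style from distances between weight vectors. A small sketch of that construction (T and the example weights are illustrative):

```python
def neighborhoods(weights, T):
    """For each weight vector, return the indices of its T nearest
    weight vectors by squared Euclidean distance (each vector is its
    own nearest neighbor)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    B = []
    for wi in weights:
        order = sorted(range(len(weights)), key=lambda j: dist2(wi, weights[j]))
        B.append(order[:T])
    return B

W = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
B = neighborhoods(W, T=2)
```

The cross-task neighborhood B2 follows the same idea but pairs each subproblem with subproblems of the other task, so transfer flows between solutions attacking similar regions of the objective space.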

Multi-objective Multi-task Differential Evolution with Multiple Knowledge Types and Transfer Adaptation (MTDE-MKTA)

This module implements MTDE-MKTA for multi-task multi-objective optimization problems.

References

[1] Li, Yanchi, and Wenyin Gong. “Multiobjective Multitask Optimization With Multiple Knowledge Types and Transfer Adaptation.” IEEE Transactions on Evolutionary Computation 29.1 (2025): 205-216.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.18 Version: 1.0

class ddmtolab.Algorithms.MTMO.MTDE_MKTA.MTDE_MKTA(problem, n=None, max_nfes=None, tau1=0.2, tau2=0.1, save_data=True, save_path='./Data', name='MTDE-MKTA', disable_tqdm=True)[source]

Bases: object

Multi-objective Multi-task Differential Evolution with Multiple Knowledge Types and Transfer Adaptation.

This algorithm features:

  • Self-adaptive parameters (F, CR, TR, KP) for each individual

  • Rank-based DE parent selection

  • Two knowledge transfer types: direct transfer and distribution-based transfer

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, tau1=0.2, tau2=0.1, save_data=True, save_path='./Data', name='MTDE-MKTA', disable_tqdm=True)[source]

Initialize MTDE-MKTA algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • tau1 (float, optional) – Mutation probability for F and CR parameters (default: 0.2)

  • tau2 (float, optional) – Mutation probability for TR and KP parameters (default: 0.1)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTDE-MKTA’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)
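The role of tau1 (and, analogously, tau2) can be sketched with a jDE-style update; this is a hedged illustration, not the library's exact rule: with probability tau1 an individual's F and CR are resampled, otherwise inherited.

```python
import numpy as np

def adapt_params(F, CR, tau1, rng):
    """With probability tau1, resample F in [0.1, 1.0) and CR in [0, 1);
    otherwise inherit the parent's values (jDE-style sketch)."""
    new_F = 0.1 + 0.9 * rng.random() if rng.random() < tau1 else F
    new_CR = rng.random() if rng.random() < tau1 else CR
    return new_F, new_CR

rng = np.random.default_rng(1)
F, CR = adapt_params(0.5, 0.9, tau1=0.2, rng=rng)
```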

optimize()[source]

Execute the MTDE-MKTA algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Evolutionary Multi-task with Effective Transfer (EMT-ET)

This module implements EMT-ET for multi-task multi-objective optimization problems.

References

[1] Lin, Jiabin, Hai-Lin Liu, Kay Chen Tan, and Fangqing Gu. “An Effective Knowledge Transfer Approach for Multiobjective Multitasking Optimization.” IEEE Transactions on Cybernetics 51.6 (2021): 3238-3248.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.16 Version: 1.0

class ddmtolab.Algorithms.MTMO.EMT_ET.EMT_ET(problem, n=None, max_nfes=None, G=8, P=0.5, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='EMT-ET', disable_tqdm=True)[source]

Bases: object

Evolutionary Multi-task with Effective Transfer.

This algorithm features:

  • Adaptive knowledge transfer based on successful transferred solutions

  • Transfer solutions selected from the Pareto front of source tasks

  • Distribution-based perturbation for transferred solutions

  • NSGA-II based environmental selection
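The distribution-based perturbation can be sketched as follows (an illustration assuming a unified [0, 1] search space; the probability argument mirrors the constructor's P, and the noise scaling is an assumption):

```python
import numpy as np

def perturb_transfer(x, target_pop, P, rng):
    """With probability P, perturb a transferred solution with Gaussian
    noise scaled by the target population's per-dimension spread."""
    if rng.random() < P:
        x = x + rng.standard_normal(x.shape) * target_pop.std(axis=0)
    return np.clip(x, 0.0, 1.0)  # keep within the unified [0, 1] space

rng = np.random.default_rng(2)
pop = rng.random((20, 5))          # target-task population
y = perturb_transfer(pop[0], pop, P=0.5, rng=rng)
```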

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, G=8, P=0.5, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='EMT-ET', disable_tqdm=True)[source]

Initialize EMT-ET algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • G (int, optional) – Number of transfer solutions per generation (default: 8)

  • P (float, optional) – Probability of distribution-based perturbation (default: 0.5)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EMT-ET’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EMT-ET algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Evolutionary Multi-task with Population Distribution-based Transfer (EMT-PD)

This module implements EMT-PD for multi-task multi-objective optimization problems.

References

[1] Liang, Zhengping, Weiqi Liang, Zhiqiang Wang, Xiaoliang Ma, Ling Liu, and Zexuan Zhu. “Multiobjective Evolutionary Multitasking With Two-Stage Adaptive Knowledge Transfer Based on Population Distribution.” IEEE Transactions on Systems, Man, and Cybernetics: Systems (2021): 1-13.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.01.13 Version: 1.0

class ddmtolab.Algorithms.MTMO.EMT_PD.EMT_PD(problem, n=None, max_nfes=None, rmp=0.3, G=5, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='EMT-PD', disable_tqdm=True)[source]

Bases: object

Evolutionary Multi-task with Population Distribution-based Transfer.

This algorithm features:

  • Two-stage adaptive knowledge transfer based on population distribution

  • Covariance-based distribution alignment between tasks

  • Multifactorial evolutionary framework with RMP

  • NSGA-II based environmental selection
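A first-order sketch of distribution alignment between tasks (EMT-PD itself uses covariance information; this simplified version matches only per-dimension mean and standard deviation):

```python
import numpy as np

def align_distribution(src, mean_t, std_t):
    """Standardize source solutions and rescale them into the target
    task's distribution (first-order moments only)."""
    mean_s, std_s = src.mean(axis=0), src.std(axis=0) + 1e-12
    return (src - mean_s) / std_s * std_t + mean_t

rng = np.random.default_rng(3)
src = rng.normal(5.0, 2.0, size=(50, 4))   # source-task population
out = align_distribution(src, mean_t=np.zeros(4), std_t=np.ones(4))
```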

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, rmp=0.3, G=5, muc=20.0, mum=15.0, save_data=True, save_path='./Data', name='EMT-PD', disable_tqdm=True)[source]

Initialize EMT-PD algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • rmp (float, optional) – Random mating probability for inter-task crossover (default: 0.3)

  • G (int, optional) – Transfer gap - perform distribution-based transfer every G generations (default: 5)

  • muc (float, optional) – Distribution index for simulated binary crossover (SBX) (default: 20.0)

  • mum (float, optional) – Distribution index for polynomial mutation (PM) (default: 15.0)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EMT-PD’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EMT-PD algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

ParEGO with Knowledge Transfer (ParEGO-KT)

This module implements ParEGO-KT for expensive multitask multiobjective optimization. It extends ParEGO with cross-task knowledge transfer using Spearman rank correlation to identify and leverage beneficial task relationships.
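The augmented Tchebycheff scalarization controlled by rho is, for normalized objectives f and weight vector w, g(x) = max_i w_i f_i(x) + rho * sum_i w_i f_i(x). A minimal sketch:

```python
import numpy as np

def aug_tchebycheff(F, w, rho=0.05):
    """Augmented Tchebycheff scalarization:
    g(x) = max_i w_i * f_i(x) + rho * sum_i w_i * f_i(x)."""
    wf = F * w
    return wf.max(axis=1) + rho * wf.sum(axis=1)

F = np.array([[0.2, 0.8],   # candidate 1 (normalized objectives)
              [0.5, 0.5]])  # candidate 2
w = np.array([0.5, 0.5])
g = aug_tchebycheff(F, w)
```

The rho term breaks ties among points with equal weighted maxima, nudging the scalarized problem toward Pareto-optimal solutions.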

References

[1] J. Knowles. “ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems.” IEEE Transactions on Evolutionary Computation, 2006, 10(1): 50-66.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.12.18 Version: 1.0

class ddmtolab.Algorithms.MTMO.ParEGO_KT.ParEGO_KT(problem, n_initial=None, n_weights=None, max_nfes=None, rho=0.05, save_data=True, save_path='./Data', name='ParEGO-KT', disable_tqdm=True)[source]

Bases: object

__init__(problem, n_initial=None, n_weights=None, max_nfes=None, rho=0.05, save_data=True, save_path='./Data', name='ParEGO-KT', disable_tqdm=True)[source]

Initialize ParEGO-KT algorithm with knowledge transfer.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n_initial (int or List[int], optional) – Number of initial samples per task (default: 11*dim - 1)

  • n_weights (int or List[int], optional) – Number of reference weight vectors per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 200)

  • rho (float, optional) – Augmentation coefficient for augmented Tchebycheff scalarization (default: 0.05)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘ParEGO-KT’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the ParEGO-KT algorithm with knowledge transfer.

Returns:

Optimization results containing decision variables, objectives, and runtime

Return type:

Results

Evolutionary Multitasking for Multi-objective Optimization Based on Generative Strategies (EMT-GS)

This module implements EMT-GS for multi-task multi-objective optimization problems. EMT-GS uses Generative Adversarial Networks (GANs) to transfer knowledge between tasks.

References

[1] Z. Liang, Y. Zhu, X. Wang, Z. Li, and Z. Zhu, “Evolutionary Multitasking for Multi-objective Optimization Based on Generative Strategies,” IEEE Transactions on Evolutionary Computation, 2022.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTMO.EMT_GS.EMT_GS(problem, n=None, max_nfes=None, G=10, lrD=0.0002, lrG=0.0003, BS=10, pp=0.5, CR=0.6, save_data=True, save_path='./Data', name='EMT-GS', disable_tqdm=True)[source]

Bases: object

Evolutionary Multitasking for Multi-objective Optimization Based on Generative Strategies.

This algorithm features:

  • GAN-based cross-task knowledge transfer

  • Generator maps source task solutions to target task space

  • DE mutation with rand-or-best strategy

  • NSGA-II based environmental selection per task
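The rand-or-best DE strategy governed by pp can be sketched as follows (an illustrative helper; the library's operator may differ in detail):

```python
import numpy as np

def de_mutant(pop, best, F, pp, rng):
    """DE mutation with a rand-or-best base vector: with probability pp
    the base is a random individual, otherwise the current best."""
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    base = pop[r1] if rng.random() < pp else best
    return base + F * (pop[r2] - pop[r3])

rng = np.random.default_rng(4)
pop = rng.random((10, 3))
v = de_mutant(pop, best=pop[0], F=0.5, pp=0.5, rng=rng)
```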

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, G=10, lrD=0.0002, lrG=0.0003, BS=10, pp=0.5, CR=0.6, save_data=True, save_path='./Data', name='EMT-GS', disable_tqdm=True)[source]

Initialize EMT-GS algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • G (int, optional) – GAN training gap in generations (default: 10)

  • lrD (float, optional) – Learning rate for discriminator (default: 0.0002)

  • lrG (float, optional) – Learning rate for generator (default: 0.0003)

  • BS (int, optional) – Batch size for GAN training (default: 10)

  • pp (float, optional) – Probability of using random (vs best) base vector (default: 0.5)

  • CR (float, optional) – Crossover rate for DE (default: 0.6)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘EMT-GS’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the EMT-GS algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Multi-objective Multi-task Evolutionary Algorithm with Progressive Auto-Encoding (MO-MTEA-PAE)

This module implements MO-MTEA-PAE for multi-task multi-objective optimization problems.

References

[1] Q. Gu, Y. Li, W. Gong, Z. Yuan, B. Ning, C. Hu, and J. Wu, “Progressive Auto-Encoding for Domain Adaptation in Evolutionary Multi-Task Optimization,” Applied Soft Computing, vol. 175, p. 113916, 2025.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTMO.MO_MTEA_PAE.MO_MTEA_PAE(problem, n=None, max_nfes=None, Seg=10, TNum=20, TGap=5, F=0.5, CR=0.9, MuC=20, MuM=15, save_data=True, save_path='./Data', name='MO-MTEA-PAE', disable_tqdm=True)[source]

Bases: object

Multi-objective Multi-task Evolutionary Algorithm with Progressive Auto-Encoding.

This algorithm features:

  • Kernel-based NFC (Nonlinear Feature Coupling) for cross-task knowledge transfer

  • Two transfer strategies: segment transfer (historical distribution) and stochastic transfer (current distribution)

  • Adaptive selection between DE and GA offspring generation

  • Adaptive selection between transfer types based on success rates

  • SPEA2 environmental selection per task

  • Elite solution transfer across tasks
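Adaptive selection between operators or transfer types based on success rates can be sketched as roulette selection over smoothed success ratios (an assumption about the mechanism, shown for illustration only):

```python
import numpy as np

def pick_strategy(successes, attempts, rng, eps=1e-6):
    """Roulette selection over smoothed success rates: strategies that
    produced more surviving offspring are chosen more often."""
    rates = (np.asarray(successes) + eps) / (np.asarray(attempts) + eps)
    probs = rates / rates.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(5)
idx = pick_strategy(successes=[8, 2], attempts=[10, 10], rng=rng)
```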

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, Seg=10, TNum=20, TGap=5, F=0.5, CR=0.9, MuC=20, MuM=15, save_data=True, save_path='./Data', name='MO-MTEA-PAE', disable_tqdm=True)[source]

Initialize MO-MTEA-PAE algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • Seg (int, optional) – Number of segments for DisPop update schedule (default: 10)

  • TNum (int, optional) – Number of transfer solutions per transfer event (default: 20)

  • TGap (int, optional) – Transfer gap in generations (default: 5)

  • F (float, optional) – DE mutation factor (default: 0.5)

  • CR (float, optional) – DE crossover rate (default: 0.9)

  • MuC (float, optional) – SBX crossover distribution index (default: 20)

  • MuM (float, optional) – PM mutation distribution index (default: 15)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MO-MTEA-PAE’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MO-MTEA-PAE algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Multi-Objective Symbiosis-Based Optimization (MO-SBO)

This module implements the MO-SBO algorithm for multi-objective many-task optimization based on symbiotic relationships in biocoenosis. The algorithm adaptively controls knowledge transfer rates by tracking six types of symbiotic interactions: mutualism, commensalism, parasitism, competition, amensalism, and neutralism.

References

[1] R.-T. Liaw and C.-K. Ting. “Evolutionary Manytasking Optimization Based on Symbiosis in Biocoenosis.” Proceedings of the AAAI Conference on Artificial Intelligence, 33(01): 4295-4303, 2019.

Notes

The implementation follows the MATLAB-based MTO-Platform framework.

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.21 Version: 1.0

class ddmtolab.Algorithms.MTMO.MO_SBO.MO_SBO(problem, n=None, max_nfes=None, benefit=0.25, harm=0.5, mu_c=20, mu_m=15, save_data=True, save_path='./Data', name='MO-SBO', disable_tqdm=True)[source]

Bases: object

Multi-Objective Symbiosis-Based Optimization for many-task multi-objective optimization.

The algorithm uses symbiotic relationships between tasks to adaptively control knowledge transfer rates. Six types of symbiotic interactions are tracked:

  • Mutualism (MIJ): Both tasks benefit (transferred solution ranks high in both)

  • Commensalism (OIJ): One benefits, other neutral

  • Parasitism (PIJ): One benefits, other harmed

  • Competition (CIJ): Both harmed

  • Amensalism (AIJ): One harmed, other neutral

  • Neutralism (NIJ): Both neutral
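A sketch of how the benefit and harm thresholds could categorize an exchange, given each task's normalized rank (0 = best) of the transferred solution. The exact ranking scheme is the paper's; these helper functions are illustrative:

```python
def effect(rank_ratio, benefit=0.25, harm=0.5):
    """Per-task view of a transferred solution by normalized rank:
    near the top it helps, near the bottom it hurts (sketch)."""
    if rank_ratio <= benefit:
        return 'benefit'
    if rank_ratio >= harm:
        return 'harm'
    return 'neutral'

def symbiosis(r_i, r_j):
    """Map the two per-task effects to one of the six symbiosis types."""
    table = {
        frozenset(['benefit']): 'mutualism (MIJ)',
        frozenset(['benefit', 'neutral']): 'commensalism (OIJ)',
        frozenset(['benefit', 'harm']): 'parasitism (PIJ)',
        frozenset(['harm']): 'competition (CIJ)',
        frozenset(['harm', 'neutral']): 'amensalism (AIJ)',
        frozenset(['neutral']): 'neutralism (NIJ)',
    }
    return table[frozenset([effect(r_i), effect(r_j)])]

kind = symbiosis(0.1, 0.9)  # one task benefits, the other is harmed
```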

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, benefit=0.25, harm=0.5, mu_c=20, mu_m=15, save_data=True, save_path='./Data', name='MO-SBO', disable_tqdm=True)[source]

Initialize MO-SBO algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int, optional) – Population size per task (default: 100)

  • max_nfes (int, optional) – Maximum number of function evaluations per task (default: 10000)

  • benefit (float, optional) – Beneficial factor threshold for symbiosis categorization (default: 0.25)

  • harm (float, optional) – Harmful factor threshold for symbiosis categorization (default: 0.5)

  • mu_c (float, optional) – Distribution index for simulated binary crossover (default: 20)

  • mu_m (float, optional) – Distribution index for polynomial mutation (default: 15)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MO-SBO’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MO-SBO algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Multi-task Evolutionary Algorithm Based on Decomposition with Transfer of Search Directions (MTEA-D-TSD)

This module implements MTEA-D-TSD for multi-task multi-objective optimization problems.

References

[1] Y. Li, W. Gong, and Q. Gu, “Transfer Search Directions Among Decomposed Subtasks for Evolutionary Multitasking in Multiobjective Optimization,” in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’24), 2024, pp. 557-565.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTMO.MTEA_D_TSD.MTEA_D_TSD(problem, n=None, max_nfes=None, TR0=0.2, CF=0.4, SNum=10, Delta=0.9, NR=2, F=0.5, CR=0.9, MuM=15, save_data=True, save_path='./Data', name='MTEA-D-TSD', disable_tqdm=True)[source]

Bases: object

Multi-task Evolutionary Algorithm Based on Decomposition with Transfer of Search Directions.

This algorithm features:

  • MOEA/D framework with Tchebycheff decomposition

  • Search direction (SD) tracking for each individual

  • Cross-task transfer of search directions based on cosine similarity

  • Adaptive per-individual transfer rate

  • DE/rand/1 mutation with polynomial mutation
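The search-direction bookkeeping driven by CF, and the cosine-similarity test used to match directions across tasks, can be sketched as follows (names are illustrative):

```python
import numpy as np

def update_sd(sd, x_new, x_old, CF=0.4):
    """Cumulative search-direction update: blend the previous direction
    with the latest move by the cumulative factor CF."""
    return (1 - CF) * sd + CF * (x_new - x_old)

def cosine_sim(a, b, eps=1e-12):
    """Cosine similarity between two search directions."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

sd = update_sd(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.zeros(3))
sim = cosine_sim(sd, np.array([1.0, 0.0, 0.0]))
```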

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, TR0=0.2, CF=0.4, SNum=10, Delta=0.9, NR=2, F=0.5, CR=0.9, MuM=15, save_data=True, save_path='./Data', name='MTEA-D-TSD', disable_tqdm=True)[source]

Initialize MTEA-D-TSD algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • TR0 (float, optional) – Initial transfer rate (default: 0.2)

  • CF (float, optional) – Cumulative factor for search direction update (default: 0.4)

  • SNum (int, optional) – Number of random samples for source selection (default: 10)

  • Delta (float, optional) – Probability of choosing parents from local neighborhood (default: 0.9)

  • NR (int, optional) – Maximum number of solutions replaced per offspring (default: 2)

  • F (float, optional) – DE mutation factor (default: 0.5)

  • CR (float, optional) – DE crossover rate (default: 0.9)

  • MuM (float, optional) – PM mutation distribution index (default: 15)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTEA-D-TSD’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTEA-D-TSD algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Multi-task Evolutionary Algorithm via Diversity- and Convergence-Oriented Knowledge Transfer (MTEA-DCK)

This module implements MTEA-DCK for multi-task multi-objective optimization problems.

References

[1] Y. Li, D. Li, W. Gong, and Q. Gu, “Multiobjective Multitask Optimization via Diversity- and Convergence-Oriented Knowledge Transfer,” IEEE Trans. Syst., Man, Cybern., Syst., vol. 55, no. 3, pp. 2367-2379, 2025.

Notes

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2026.02.22 Version: 1.0

class ddmtolab.Algorithms.MTMO.MTEA_DCK.MTEA_DCK(problem, n=None, max_nfes=None, Tau=0.1, TRC0=0.3, save_data=True, save_path='./Data', name='MTEA-DCK', disable_tqdm=True)[source]

Bases: object

Multi-task Evolutionary Algorithm via Diversity- and Convergence-Oriented Knowledge Transfer.

This algorithm features:

  • Competitive Swarm Optimizer (CSO) framework with winner/loser pairing

  • DE-based generation with diversified knowledge transfer (DKT) via region mapping

  • CSO-based generation with convergent knowledge transfer (CKT) via fragment swap

  • Adaptive per-individual parameters (F, CR, TRD) with Cauchy/Normal perturbation

  • SPEA2 environmental selection
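The CSO winner/loser pairing can be sketched as follows (a simplified, velocity-free variant for illustration, assuming minimization of a scalar fitness):

```python
import numpy as np

def cso_pair(pop, fitness, rng):
    """Pair two random individuals: the better one (winner) survives
    unchanged, and the loser is pulled toward the winner."""
    i, j = rng.choice(len(pop), size=2, replace=False)
    win, lose = (i, j) if fitness[i] <= fitness[j] else (j, i)
    r = rng.random(pop.shape[1])
    x_lose = pop[lose] + r * (pop[win] - pop[lose])  # move toward winner
    return win, lose, x_lose

rng = np.random.default_rng(6)
pop = rng.random((8, 4))
fit = rng.random(8)
win, lose, x = cso_pair(pop, fit, rng)
```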

algorithm_information

Dictionary containing algorithm capabilities and requirements

Type:

dict

__init__(problem, n=None, max_nfes=None, Tau=0.1, TRC0=0.3, save_data=True, save_path='./Data', name='MTEA-DCK', disable_tqdm=True)[source]

Initialize MTEA-DCK algorithm.

Parameters:
  • problem (MTOP) – Multi-task optimization problem instance

  • n (int or List[int], optional) – Population size per task (default: 100)

  • max_nfes (int or List[int], optional) – Maximum number of function evaluations per task (default: 10000)

  • Tau (float, optional) – Probability of random parameter reset (default: 0.1)

  • TRC0 (float, optional) – Initial transfer rate for convergent knowledge transfer (default: 0.3)

  • save_data (bool, optional) – Whether to save optimization data (default: True)

  • save_path (str, optional) – Path to save results (default: ‘./Data’)

  • name (str, optional) – Name for the experiment (default: ‘MTEA-DCK’)

  • disable_tqdm (bool, optional) – Whether to disable progress bar (default: True)

optimize()[source]

Execute the MTEA-DCK algorithm.

Returns:

Optimization results containing decision variables, objectives, constraints, and runtime

Return type:

Results

Problems

MTOP Class

The MTOP (Multitask Optimization Problem) class is the core component for defining optimization problems.

class ddmtolab.Methods.mtop.MTOP(unified_eval_mode: bool = False, fill_value: float = 0.0)[source]

Bases: object

Multi-Task Optimization Problem (MTOP) definition and management.

This class allows defining multiple optimization tasks, each with decision variables, objectives, and constraints. It handles vectorized and non-vectorized objective/constraint functions, normalizes outputs to consistent 2D arrays, and provides unified evaluation interfaces.

Parameters:
  • unified_eval_mode (bool, optional) – If True, evaluation results will be padded to maximum dimensions across all tasks (default is False).

  • fill_value (float, optional) – Value used for padding in unified evaluation mode (default is 0.0).

tasks

List of task dictionaries containing function wrappers and metadata.

Type:

List[Dict[str, Any]]

dims

List of decision variable dimensions for each task.

Type:

List[int]

bounds

List of (lower_bound, upper_bound) tuples for each task.

Type:

List[Tuple[np.ndarray, np.ndarray]]

n_tasks

Total number of tasks.

Type:

int

m_max

Maximum number of objectives across all tasks.

Type:

int

c_max

Maximum number of constraints across all tasks.

Type:

int

unified_eval_mode

Whether unified evaluation mode is enabled.

Type:

bool

fill_value

Fill value for padding in unified evaluation mode.

Type:

float

Examples

Create a simple MTOP with two tasks:

>>> def sphere(x):
...     return np.sum(x**2, axis=1)
>>> def rosenbrock(x):
...     x = np.atleast_2d(x)
...     return np.sum(100*(x[:, 1:] - x[:, :-1]**2)**2 + (1 - x[:, :-1])**2, axis=1)
>>>
>>> mtop = MTOP()
>>> mtop.add_task(sphere, dim=3)
0
>>> mtop.add_task(rosenbrock, dim=5, lower_bound=-5, upper_bound=5)
1
>>>
>>> # Evaluate first task
>>> X = np.random.rand(10, 3)
>>> obj, con = mtop.evaluate_task(0, X)
>>> obj.shape
(10, 1)

See also

ObjectiveFunctionWrapper

Wrapper for objective functions

ConstraintFunctionWrapper

Wrapper for constraint functions

add_task(objective_func: Callable[[ndarray], Any] | Tuple[Callable, ...], dim: int | Tuple[int, ...], constraint_func: Callable | List[Callable] | Tuple[List[Callable], ...] | None = None, lower_bound: float | List[float] | ndarray | Tuple[float | List[float] | ndarray, ...] | None = None, upper_bound: float | List[float] | ndarray | Tuple[float | List[float] | ndarray, ...] | None = None) int | List[int][source]

Add one or more tasks to MTOP.

This method provides a flexible interface for adding tasks with various configurations. It supports both single task and multiple task additions.

Parameters:
  • objective_func (Callable or Tuple[Callable, ...]) –

    Objective function(s) to evaluate. Can be:

    • A single callable: adds one task

    • A tuple of callables: adds multiple tasks

    Each function should accept X with shape (n, dim) and return objective values.

  • dim (int or Tuple[int, ...]) –

    Dimension(s) of decision variables. Can be:

    • A single int: dimension for one task (or broadcast to all if multiple funcs)

    • A tuple of ints: dimensions for each task in objective_func tuple

  • constraint_func (Callable, List[Callable], Tuple[List[Callable], ...], optional) –

    Constraint function(s). Can be:

    • None: no constraints (default)

    • A single callable: one constraint function

    • A list of callables: multiple constraint functions for one task

    • A tuple: constraint functions for each task (when adding multiple)

  • lower_bound (float, List[float], np.ndarray, Tuple[...], optional) –

    Lower bound(s) for decision variables. Can be:

    • None: defaults to zeros array with length dim

    • float: broadcasts to all dimensions

    • array: must have length dim

  • upper_bound (float, List[float], np.ndarray, Tuple[...], optional) –

    Upper bound(s) for decision variables. Can be:

    • None: defaults to ones array with length dim

    • float: broadcasts to all dimensions

    • array: must have length dim

Returns:

Task index (single task) or list of task indices (multiple tasks).

Return type:

int or List[int]

Raises:

ValueError – If dimensions mismatch or bounds are incompatible.

Examples

Add a single task with default bounds [0, 1]:

>>> def sphere(x):
...     return np.sum(x**2, axis=1)
>>> mtop = MTOP()
>>> idx = mtop.add_task(sphere, dim=3)
>>> idx
0

Add a single task with custom bounds (scalar):

>>> idx = mtop.add_task(sphere, dim=5, lower_bound=-5, upper_bound=5)
>>> idx
1

Add multiple tasks at once:

>>> def f1(x): return np.sum(x**2, axis=1)
>>> def f2(x): return np.sum((x-1)**2, axis=1)
>>> indices = mtop.add_task(
...     objective_func=(f1, f2),
...     dim=(3, 4),
...     lower_bound=([-1]*3, [-2]*4),
...     upper_bound=([1]*3, [2]*4)
... )
>>> indices
[2, 3]

Add task with constraints:

>>> def con(x): return x[0] - 0.5
>>> idx = mtop.add_task(sphere, dim=2, constraint_func=con)
>>> idx
4
add_tasks(tasks_config: List[Dict[str, Any]]) List[int][source]

Add multiple tasks from configuration dictionaries.

Parameters:

tasks_config (List[Dict[str, Any]]) –

List of task configuration dictionaries. Each dict must contain:

  • ’objective_func’ : Callable (required)

  • ’dim’ : int (required)

  • ’constraint_func’ : Callable or List[Callable] (optional)

  • ’lower_bound’ : float, List[float], or np.ndarray (optional, default zeros)

  • ’upper_bound’ : float, List[float], or np.ndarray (optional, default ones)

Returns:

List of task indices.

Return type:

List[int]

Raises:
  • TypeError – If tasks_config is not a list.

  • ValueError – If any config dict is missing required keys.

Examples

>>> def f1(x): return np.sum(x**2, axis=1)
>>> def f2(x): return np.sum((x-1)**2, axis=1)
>>> configs = [
...     {'objective_func': f1, 'dim': 3},
...     {'objective_func': f2, 'dim': 5, 'lower_bound': -5, 'upper_bound': 5}
... ]
>>> mtop = MTOP()
>>> indices = mtop.add_tasks(configs)
>>> indices
[0, 1]
evaluate_task(task_idx: int, X: ndarray, eval_objectives: bool | int | List[int] = True, eval_constraints: bool | int | List[int] = True) Tuple[ndarray, ndarray][source]

Evaluate a task with selective evaluation support.

Parameters:
  • task_idx (int) – Index of the task to evaluate.

  • X (np.ndarray) – Input array of shape (n_samples, dim) or (dim,).

  • eval_objectives (bool, int, or List[int], optional) –

    Evaluation mode for objectives (default is True):

    • True: evaluate all objectives

    • False: skip objective evaluation, return empty array

    • int: evaluate only the i-th objective

    • List[int]: evaluate specified objectives by indices

  • eval_constraints (bool, int, or List[int], optional) –

    Evaluation mode for constraints (default is True):

    • True: evaluate all constraints

    • False: skip constraint evaluation, return empty array

    • int: evaluate only the i-th constraint

    • List[int]: evaluate specified constraints by indices

Returns:

Tuple of (objectives, constraints) as 2D numpy arrays:

  • objectives: shape (n_samples, n_evaluated_objectives) or padded to (n_samples, m_max) if unified_eval_mode is True

  • constraints: shape (n_samples, n_evaluated_constraints) or padded to (n_samples, c_max) if unified_eval_mode is True

Return type:

Tuple[np.ndarray, np.ndarray]

Raises:

ValueError – If task_idx is out of range or the input dimension does not match the task's dimension.

Examples

Evaluate all objectives and constraints:

>>> def sphere(x):
...     return np.sum(x**2, axis=1)
>>> mtop = MTOP()
>>> mtop.add_task(sphere, dim=3)
0
>>> X = np.random.rand(10, 3)
>>> obj, con = mtop.evaluate_task(0, X)
>>> obj.shape
(10, 1)

Evaluate only specific objectives:

>>> def multi_obj(x):
...     x = np.atleast_2d(x)
...     f1 = np.sum(x**2, axis=1)
...     f2 = np.sum((x-1)**2, axis=1)
...     f3 = np.sum(x, axis=1)
...     return np.column_stack([f1, f2, f3])
>>> mtop2 = MTOP()
>>> mtop2.add_task(multi_obj, dim=3)
0
>>> X = np.random.rand(10, 3)
>>> obj, con = mtop2.evaluate_task(0, X, eval_objectives=[0, 2])
>>> obj.shape
(10, 2)

Skip constraint evaluation:

>>> mtop3 = MTOP()
>>> mtop3.add_task(sphere, dim=3)
0
>>> X = np.random.rand(10, 3)
>>> obj, con = mtop3.evaluate_task(0, X, eval_constraints=False)
>>> con.shape
(10, 0)
evaluate_tasks(task_indices: List[int], X_list: List[ndarray], eval_objectives: bool | int | List[int] | List[bool | int | List[int]] = True, eval_constraints: bool | int | List[int] | List[bool | int | List[int]] = True) Tuple[List[ndarray], List[ndarray]][source]

Evaluate multiple tasks simultaneously.

Parameters:
  • task_indices (List[int]) – List of task indices to evaluate.

  • X_list (List[np.ndarray]) – List of input arrays, one for each task.

  • eval_objectives (bool, int, List[int], or List[Union[...]], optional) –

    Evaluation mode for objectives (default is True):

    • Single mode: applied to all tasks

    • List of modes: per-task evaluation modes

  • eval_constraints (bool, int, List[int], or List[Union[...]], optional) –

    Evaluation mode for constraints (default is True):

    • Single mode: applied to all tasks

    • List of modes: per-task evaluation modes

Returns:

Tuple of (list of objective arrays, list of constraint arrays).

Return type:

Tuple[List[np.ndarray], List[np.ndarray]]

Raises:

ValueError – If task_indices and X_list have mismatched lengths.

Examples

>>> def f1(x): return np.sum(x**2, axis=1)
>>> def f2(x): return np.sum((x-1)**2, axis=1)
>>> mtop = MTOP()
>>> mtop.add_task(f1, dim=3)
0
>>> mtop.add_task(f2, dim=4)
1
>>> mtop.add_task(f1, dim=5)
2
>>> task_indices = [0, 1, 2]
>>> X_list = [np.random.rand(10, 3), np.random.rand(10, 4), np.random.rand(10, 5)]
>>> objs, cons = mtop.evaluate_tasks(task_indices, X_list)
>>> len(objs)
3
get_task_info(task_idx: int) Dict[str, Any][source]

Get comprehensive information about a specific task.

Parameters:

task_idx (int) – Index of the task.

Returns:

Dictionary containing task information:

  • 'dimension' : int - Decision variable dimension

  • 'n_objectives' : int - Number of objectives

  • 'n_constraints' : int - Number of constraints

  • 'lower_bounds' : np.ndarray - Lower bounds

  • 'upper_bounds' : np.ndarray - Upper bounds

  • 'objective_func' : Callable - Raw objective function

  • 'constraint_funcs' : List[Callable] - Constraint function wrappers

Return type:

Dict[str, Any]

Raises:

ValueError – If task_idx is out of range.

Examples

>>> def sphere(x): return np.sum(x**2, axis=1)
>>> mtop = MTOP()
>>> mtop.add_task(sphere, dim=3)
0
>>> info = mtop.get_task_info(0)
>>> print(f"Task 0 has {info['n_objectives']} objectives")
Task 0 has 1 objectives
set_unified_eval_mode(enabled: bool, fill_value: float = 0.0) None[source]

Set unified evaluation mode configuration.

In unified evaluation mode, all task evaluations are padded to have the same dimensions (m_max objectives and c_max constraints).

Parameters:
  • enabled (bool) – Enable or disable unified evaluation mode.

  • fill_value (float, optional) – Value used for padding (default is 0.0).

Examples

>>> mtop = MTOP()
>>> mtop.set_unified_eval_mode(enabled=True, fill_value=0.0)
>>> mtop.unified_eval_mode
True
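The padding behaviour of unified evaluation mode can be illustrated with plain NumPy. This sketch is independent of MTOP and only mirrors the documented semantics (pad every task's objective array to m_max columns using fill_value); the helper name `pad_objs` is hypothetical and not part of the library API.

```python
import numpy as np

def pad_objs(objs, m_max, fill_value=0.0):
    """Pad an (n, m) objective array to (n, m_max) columns with fill_value."""
    n, m = objs.shape
    padded = np.full((n, m_max), fill_value)
    padded[:, :m] = objs
    return padded

# Task A has 1 objective; if another task has 2, unified mode pads A to m_max = 2.
objs_a = np.array([[1.0], [2.0]])
padded_a = pad_objs(objs_a, m_max=2)
print(padded_a.shape)  # (2, 2); the second column holds the fill value 0.0
```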

Benchmark Problem Suites

STSO Problems:

class ddmtolab.Problems.STSO.classical_so.CLASSICALSO[source]

Classical Single-Task Optimization (CLASSICALSO) benchmark problems.

This class provides a set of standard single-objective optimization benchmark functions (e.g., Ackley, Rastrigin, Sphere) configured as Multi-Task Optimization Problems (MTOPs) with only one task. This serves as a baseline for comparing single-task solvers or as individual tasks in a multi-task setting.

P1(D=50) MTOP[source]

Generates a single-task MTOP based on the Ackley function.

The search space is set to [-100.0, 100.0] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Ackley task.

Return type:

MTOP
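For reference, a minimal NumPy sketch of the canonical Ackley function (with the usual constants a=20, b=0.2, c=2π). This is the textbook definition, not necessarily the exact implementation behind P1, which may apply its own scaling within the [-100, 100] search space.

```python
import numpy as np

def ackley(X, a=20.0, b=0.2, c=2.0 * np.pi):
    """Canonical Ackley function, vectorized over rows of X (shape (n, D))."""
    D = X.shape[1]
    term1 = -a * np.exp(-b * np.sqrt(np.sum(X**2, axis=1) / D))
    term2 = -np.exp(np.sum(np.cos(c * X), axis=1) / D)
    return term1 + term2 + a + np.e

X = np.zeros((1, 50))  # global optimum of the unshifted Ackley function
print(ackley(X))       # ≈ [0.]
```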

P2(D=50) MTOP[source]

Generates a single-task MTOP based on the Elliptic function.

The search space is set to [-100.0, 100.0] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Elliptic task.

Return type:

MTOP

P3(D=50) MTOP[source]

Generates a single-task MTOP based on the Griewank function.

The search space is set to [-100.0, 100.0] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Griewank task.

Return type:

MTOP

P4(D=50) MTOP[source]

Generates a single-task MTOP based on the Rastrigin function.

The search space is set to [-100.0, 100.0] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Rastrigin task.

Return type:

MTOP
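Similarly, a minimal NumPy sketch of the canonical Rastrigin function. This shows the standard highly multi-modal form with its global minimum of 0 at the origin; the library's P4 task may differ in constants or search-space handling.

```python
import numpy as np

def rastrigin(X):
    """Canonical Rastrigin function, vectorized over rows of X (shape (n, D))."""
    D = X.shape[1]
    return 10.0 * D + np.sum(X**2 - 10.0 * np.cos(2.0 * np.pi * X), axis=1)

X = np.zeros((1, 50))  # global optimum of the unshifted Rastrigin function
print(rastrigin(X))    # [0.]
```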

P5(D=50) MTOP[source]

Generates a single-task MTOP based on the Rosenbrock function.

The search space is set to [-100.0, 100.0] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Rosenbrock task.

Return type:

MTOP

P6(D=50) MTOP[source]

Generates a single-task MTOP based on the Schwefel function (F6).

The search space is set to [-500.0, 500.0] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Schwefel task.

Return type:

MTOP

P7(D=50) MTOP[source]

Generates a single-task MTOP based on the Schwefel 2.22 function (F7).

The search space is set to [-100.0, 100.0] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Schwefel 2.22 task.

Return type:

MTOP

P8(D=50) MTOP[source]

Generates a single-task MTOP based on the Sphere function.

The search space is set to [-100.0, 100.0] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Sphere task.

Return type:

MTOP

P9(D=50) MTOP[source]

Generates a single-task MTOP based on the Weierstrass function.

The search space is set to [-0.5, 0.5] in all dimensions.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the Weierstrass task.

Return type:

MTOP

problem_information = {'n_cases': 9, 'n_cons': '0', 'n_dims': 'D', 'n_objs': '1', 'n_tasks': '1', 'type': 'synthetic'}
class ddmtolab.Problems.STSO.cec10_cso.CEC10_CSO[source]

CEC 2010 Competition on Constrained Real-Parameter Optimization (CSO) benchmark problems.

This class provides constrained single-objective optimization benchmark functions configured as Multi-Task Optimization Problems (MTOPs) with only one task.

References

[1] Mallipeddi, Rammohan and Suganthan, Ponnuthurai. “Problem Definitions and Evaluation Criteria for the CEC 2010 Competition on Constrained Real-parameter Optimization.” (2010)

delta

Tolerance for equality constraints (default: 1e-4).

Type:

float

data_dir

The directory path for problem data files.

Type:

str

max_dim

Maximum allowed dimension (30, due to offset vector size).

Type:

int
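The delta attribute above follows the CEC 2010 convention in which an equality constraint h(x) = 0 is treated as satisfied whenever |h(x)| - delta <= 0. A minimal NumPy sketch of that rule (the helper name `eq_violation` is hypothetical, not part of the library API):

```python
import numpy as np

DELTA = 1e-4  # equality-constraint tolerance, matching the documented default

def eq_violation(h_vals, delta=DELTA):
    """Convert equality constraints h(x) = 0 into non-negative violations.

    h(x) = 0 counts as satisfied when |h(x)| <= delta; positive return
    values measure how far the constraint is violated beyond the tolerance.
    """
    return np.maximum(np.abs(h_vals) - delta, 0.0)

h = np.array([0.0, 5e-5, 0.01])
print(eq_violation(h))  # [0.     0.     0.0099]
```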

P1(dim=10) MTOP[source]

Generates CEC10_CSO Problem 1.

This is a constrained optimization problem with:

  • 1 objective function

  • 2 inequality constraints

  • Search space: [0, 10] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 1.

Return type:

MTOP

P10(dim=10) MTOP[source]

Generates CEC10_CSO Problem 10.

This is a constrained optimization problem with:

  • 1 objective function (Rosenbrock)

  • 1 equality constraint

  • Uses rotation matrix M

  • Search space: [-500, 500] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 10.

Return type:

MTOP

P11(dim=10) MTOP[source]

Generates CEC10_CSO Problem 11.

This is a constrained optimization problem with:

  • 1 objective function (modified cosine function)

  • 1 equality constraint (Rosenbrock)

  • Uses rotation matrix M

  • Search space: [-100, 100] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 11.

Return type:

MTOP

P12(dim=10) MTOP[source]

Generates CEC10_CSO Problem 12.

This is a constrained optimization problem with:

  • 1 objective function (Schwefel)

  • 1 equality constraint

  • 1 inequality constraint

  • Search space: [-1000, 1000] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 12.

Return type:

MTOP

P13(dim=10) MTOP[source]

Generates CEC10_CSO Problem 13.

This is a constrained optimization problem with:

  • 1 objective function (modified Schwefel)

  • 3 inequality constraints

  • Search space: [-500, 500] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 13.

Return type:

MTOP

P14(dim=10) MTOP[source]

Generates CEC10_CSO Problem 14.

This is a constrained optimization problem with:

  • 1 objective function (Rosenbrock)

  • 3 inequality constraints

  • Search space: [-1000, 1000] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 14.

Return type:

MTOP

P15(dim=10) MTOP[source]

Generates CEC10_CSO Problem 15.

This is a constrained optimization problem with:

  • 1 objective function (Rosenbrock)

  • 3 inequality constraints

  • Uses rotation matrix M

  • Search space: [-1000, 1000] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 15.

Return type:

MTOP

P16(dim=10) MTOP[source]

Generates CEC10_CSO Problem 16.

This is a constrained optimization problem with:

  • 1 objective function (Griewank)

  • 1 inequality constraint

  • 2 equality constraints

  • Search space: [-10, 10] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 16.

Return type:

MTOP

P17(dim=10) MTOP[source]

Generates CEC10_CSO Problem 17.

This is a constrained optimization problem with:

  • 1 objective function (sum of squared differences)

  • 2 inequality constraints

  • 1 equality constraint

  • Search space: [-10, 10] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 17.

Return type:

MTOP

P18(dim=10) MTOP[source]

Generates CEC10_CSO Problem 18.

This is a constrained optimization problem with:

  • 1 objective function (sum of squared differences)

  • 1 inequality constraint

  • 1 equality constraint

  • Search space: [-50, 50] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 18.

Return type:

MTOP

P2(dim=10) MTOP[source]

Generates CEC10_CSO Problem 2.

This is a constrained optimization problem with:

  • 1 objective function (max function)

  • 2 inequality constraints

  • 1 equality constraint

  • Search space: [-5.12, 5.12] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 2.

Return type:

MTOP

P3(dim=10) MTOP[source]

Generates CEC10_CSO Problem 3.

This is a constrained optimization problem with:

  • 1 objective function (Rosenbrock)

  • 1 equality constraint

  • Search space: [-1000, 1000] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 3.

Return type:

MTOP

P4(dim=10) MTOP[source]

Generates CEC10_CSO Problem 4.

This is a constrained optimization problem with:

  • 1 objective function (max function)

  • 4 equality constraints

  • Search space: [-50, 50] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 4.

Return type:

MTOP

P5(dim=10) MTOP[source]

Generates CEC10_CSO Problem 5.

This is a constrained optimization problem with:

  • 1 objective function (max function)

  • 2 equality constraints

  • Search space: [-600, 600] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 5.

Return type:

MTOP

P6(dim=10) MTOP[source]

Generates CEC10_CSO Problem 6.

This is a constrained optimization problem with:

  • 1 objective function (max function)

  • 2 equality constraints

  • Uses rotation matrix M

  • Search space: [-600, 600] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 6.

Return type:

MTOP

P7(dim=10) MTOP[source]

Generates CEC10_CSO Problem 7.

This is a constrained optimization problem with:

  • 1 objective function (Rosenbrock)

  • 1 inequality constraint

  • Search space: [-140, 140] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 7.

Return type:

MTOP

P8(dim=10) MTOP[source]

Generates CEC10_CSO Problem 8.

This is a constrained optimization problem with:

  • 1 objective function (Rosenbrock)

  • 1 inequality constraint

  • Uses rotation matrix M

  • Search space: [-140, 140] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 8.

Return type:

MTOP

P9(dim=10) MTOP[source]

Generates CEC10_CSO Problem 9.

This is a constrained optimization problem with:

  • 1 objective function (Rosenbrock)

  • 1 equality constraint

  • Search space: [-500, 500] for all dimensions

Parameters:

dim (int, optional) – The dimensionality of the search space (default is 10, max is 30).

Returns:

A Multi-Task Optimization Problem instance containing Problem 9.

Return type:

MTOP

max_dim = 30
problem_information = {'n_cases': 18, 'n_cons': '[1, 5]', 'n_dims': '[10, 30]', 'n_objs': '1', 'n_tasks': '1', 'type': 'synthetic'}
class ddmtolab.Problems.STSO.stsotest.STSOtest[source]

Modified Single-Task Optimization (STSOtest) benchmark problems.

This class provides a set of standard single-objective optimization benchmark functions (e.g., Ackley, Rastrigin, Sphere) configured as Multi-Task Optimization Problems (MTOPs) with only one task. Unlike CLASSICALSO, this class applies fixed non-identity rotation matrices (M) and non-zero offset vectors (o), producing rotated and shifted variants of the corresponding classical problems.

P1(D=50) MTOP[source]

Generates a single-task MTOP based on the Ackley function.

The search space is set to [-100.0, 100.0] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Ackley task.

Return type:

MTOP

P2(D=50) MTOP[source]

Generates a single-task MTOP based on the Elliptic function.

The search space is set to [-100.0, 100.0] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Elliptic task.

Return type:

MTOP

P3(D=50) MTOP[source]

Generates a single-task MTOP based on the Griewank function.

The search space is set to [-100.0, 100.0] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Griewank task.

Return type:

MTOP

P4(D=50) MTOP[source]

Generates a single-task MTOP based on the Rastrigin function.

The search space is set to [-100.0, 100.0] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Rastrigin task.

Return type:

MTOP

P5(D=50) MTOP[source]

Generates a single-task MTOP based on the Rosenbrock function.

The search space is set to [-100.0, 100.0] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Rosenbrock task.

Return type:

MTOP

P6(D=50) MTOP[source]

Generates a single-task MTOP based on the Schwefel function (F6).

The search space is set to [-500.0, 500.0] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Schwefel task.

Return type:

MTOP

P7(D=50) MTOP[source]

Generates a single-task MTOP based on the Schwefel 2.22 function (F7).

The search space is set to [-100.0, 100.0] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Schwefel 2.22 task.

Return type:

MTOP

P8(D=50) MTOP[source]

Generates a single-task MTOP based on the Sphere function.

The search space is set to [-100.0, 100.0] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Sphere task.

Return type:

MTOP

P9(D=50) MTOP[source]

Generates a single-task MTOP based on the Weierstrass function.

The search space is set to [-0.5, 0.5] in all dimensions. Uses fixed rotation matrix M and offset vector o.

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance containing the modified Weierstrass task.

Return type:

MTOP

problem_information = {'n_cases': 9, 'n_cons': '0', 'n_dims': 'D', 'n_objs': '1', 'n_tasks': '1', 'type': 'synthetic'}

STMO Problems:

class ddmtolab.Problems.STMO.ZDT.ZDT[source]

Implementation of the ZDT test suite for multi-objective optimization.

The ZDT test problems (ZDT1 to ZDT6) are standard bi-objective optimization benchmarks proposed by Zitzler, Deb, and Thiele (2000).

Each method in this class generates a Multi-Task Optimization Problem (MTOP) instance containing a single ZDT task.

References

[1] E. Zitzler, K. Deb, and L. Thiele. “Comparison of multiobjective evolutionary algorithms: Empirical results.” Evolutionary Computation, 2000, 8(2): 173-195.

Notes

All ZDT problems have exactly M=2 objectives. The decision space dimension can be adjusted via the D parameter.

ZDT1(D=30) MTOP[source]

Generates the ZDT1 problem.

ZDT1 features a convex Pareto front and tests the ability to converge to the optimal front uniformly.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the ZDT1 task.

Return type:

MTOP

ZDT2(D=30) MTOP[source]

Generates the ZDT2 problem.

ZDT2 features a non-convex Pareto front and tests the ability to maintain diversity in non-convex regions.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the ZDT2 task.

Return type:

MTOP

ZDT3(D=30) MTOP[source]

Generates the ZDT3 problem.

ZDT3 features a disconnected Pareto front and tests the ability to maintain subpopulations in different regions.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the ZDT3 task.

Return type:

MTOP

ZDT4(D=10) MTOP[source]

Generates the ZDT4 problem.

ZDT4 features multiple local Pareto fronts and tests the ability to avoid local optima. It has 21^9 local Pareto fronts.

Parameters:

D (int, optional) – Number of decision variables (default is 10).

Returns:

A Multi-Task Optimization Problem instance containing the ZDT4 task.

Return type:

MTOP

ZDT5(D=80) MTOP[source]

Generates the ZDT5 problem.

ZDT5 is a binary-encoded problem with a deceptive fitness landscape. The dimension is adjusted to be 30 + 5k where k is an integer.

Parameters:

D (int, optional) – Number of decision variables (default is 80). The actual dimension will be adjusted to 30 + 5k format.

Returns:

A Multi-Task Optimization Problem instance containing the ZDT5 task.

Return type:

MTOP

Notes

Decision variables are binary-encoded. Continuous inputs are converted to binary by thresholding at 0.5: x > 0.5 → 1, x ≤ 0.5 → 0.
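The thresholding rule described above can be sketched in one line of NumPy; note that values exactly at 0.5 map to 0, per the x ≤ 0.5 → 0 convention.

```python
import numpy as np

# Map a continuous decision vector to a bit string by thresholding at 0.5.
x = np.array([0.1, 0.5, 0.50001, 0.9])
bits = (x > 0.5).astype(int)
print(bits)  # [0 0 1 1]
```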

ZDT6(D=10) MTOP[source]

Generates the ZDT6 problem.

ZDT6 features a non-uniform search space and non-convex Pareto front. It has low density of solutions near the Pareto front.

Parameters:

D (int, optional) – Number of decision variables (default is 10).

Returns:

A Multi-Task Optimization Problem instance containing the ZDT6 task.

Return type:

MTOP

problem_information = {'n_cases': 6, 'n_cons': '0', 'n_dims': 'D', 'n_objs': '2', 'n_tasks': '1', 'type': 'synthetic'}
class ddmtolab.Problems.STMO.DTLZ.DTLZ[source]

Implementation of the DTLZ test suite for multi-objective optimization.

The DTLZ test problems (DTLZ1 to DTLZ7) are standard unconstrained multi-objective optimization benchmarks. DTLZ8 and DTLZ9 are constrained.

Each method in this class generates a Multi-Task Optimization Problem (MTOP) instance containing a single DTLZ task.

Notes

The decision variables (x) are typically split into M-1 position-related variables (x[0:M-1]) and k distance-related variables (x[M-1:]).

DTLZ1(M=3, D=None) MTOP[source]

Generates the DTLZ1 problem.

DTLZ1 features a simple linear Pareto-optimal front (PF) and a complex multi-modal search space due to the g-function.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to M + k - 1, where k=5 for DTLZ1 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ1 task.

Return type:

MTOP

DTLZ2(M=3, D=None) MTOP[source]

Generates the DTLZ2 problem.

DTLZ2 features a simple convex spherical PF and a simple uni-modal g-function.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to M + k - 1, where k=10 for DTLZ2 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ2 task.

Return type:

MTOP
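For reference, a minimal NumPy sketch of the standard DTLZ2 formulation. This is the canonical definition (decision variables in [0, 1], spherical front with sum of squared objectives equal to 1 when g = 0); the MTOP wrapper returned by this method may differ in details.

```python
import numpy as np

def dtlz2(X, M=3):
    """Canonical DTLZ2; X rows lie in [0, 1]^D, front is the unit sphere."""
    g = np.sum((X[:, M - 1:] - 0.5) ** 2, axis=1)  # distance function
    theta = X[:, :M - 1] * np.pi / 2.0             # position variables
    F = np.empty((X.shape[0], M))
    for i in range(M):
        f = 1.0 + g
        f = f * np.prod(np.cos(theta[:, :M - 1 - i]), axis=1)
        if i > 0:
            f = f * np.sin(theta[:, M - 1 - i])
        F[:, i] = f
    return F

# With all distance variables at 0.5, g = 0 and the objective vector lies
# on the unit sphere: sum(F**2) == 1.
X = np.full((1, 12), 0.5)
F = dtlz2(X)
print(np.sum(F**2))  # ≈ 1.0
```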

DTLZ3(M=3, D=None) MTOP[source]

Generates the DTLZ3 problem.

DTLZ3 features a convex spherical PF (similar to DTLZ2) but has a multi-modal g-function, making it difficult to converge.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to M + k - 1, where k=10 for DTLZ3 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ3 task.

Return type:

MTOP

DTLZ4(M=3, D=None, alpha=100) MTOP[source]

Generates the DTLZ4 problem.

DTLZ4 features a convex spherical PF (similar to DTLZ2) but introduces a bias towards certain objective regions due to the exponent \(\alpha\), making diversity maintenance challenging.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to M + k - 1, where k=10 for DTLZ4 (default is None).

  • alpha (int, optional) – Exponent used to bias the solution (default is 100).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ4 task.

Return type:

MTOP

DTLZ5(M=3, D=None) MTOP[source]

Generates the DTLZ5 problem.

DTLZ5 features a degenerated (curve-like) PF, lying on a lower-dimensional manifold of the M-dimensional objective space.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to M + k - 1, where k=10 for DTLZ5 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ5 task.

Return type:

MTOP

DTLZ6(M=3, D=None) MTOP[source]

Generates the DTLZ6 problem.

DTLZ6 also features a degenerated (curve-like) PF (similar to DTLZ5), but its biased g-function (\(g(x_M) = \sum x_M^{0.1}\)) makes convergence to the front considerably harder.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to M + k - 1, where k=10 for DTLZ6 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ6 task.

Return type:

MTOP

DTLZ7(M=3, D=None) MTOP[source]

Generates the DTLZ7 problem.

DTLZ7 features a disconnected PF and is used to test an algorithm’s ability to converge to multiple disconnected regions.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to M + k - 1, where k=20 for DTLZ7 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ7 task.

Return type:

MTOP

DTLZ8(M=3, D=None) MTOP[source]

Generates the DTLZ8 problem (Constrained).

DTLZ8 has simple objective functions but complex constraints, typically resulting in a PF that is a linear or piecewise linear manifold.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to M * k, where k=10 for DTLZ8 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ8 task.

Return type:

MTOP

DTLZ9(M=2, D=None) MTOP[source]

Generates the DTLZ9 problem (Constrained).

DTLZ9 has simple objective functions but constraints that define a parabolic shape for the PF.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to M * k, where k=10 for DTLZ9 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the DTLZ9 task.

Return type:

MTOP

problem_information = {'n_cases': 9, 'n_cons': '[0, M]', 'n_dims': 'D', 'n_objs': 'M', 'n_tasks': '1', 'type': 'synthetic'}
class ddmtolab.Problems.STMO.WFG.WFG[source]

Implementation of the WFG (Walking Fish Group) test suite for multi-objective optimization.

The WFG test problems are scalable benchmarks designed to test various characteristics of multi-objective optimization algorithms, including bias, flatness, and mixed Pareto fronts.

WFG1(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG1 problem.

WFG1 features a mixed Pareto front with both convex and non-convex regions, along with polynomial bias and flat region transformations.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter (number of position-related variables), which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG1 task.

Return type:

MTOP

WFG2(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG2 problem.

WFG2 features a disconnected Pareto front and uses non-separable reduction functions. It tests an algorithm’s ability to maintain diversity across disconnected regions.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter, which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG2 task.

Return type:

MTOP

WFG3(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG3 problem.

WFG3 features a linear Pareto front with a degenerate geometry, testing an algorithm’s ability to handle problems with dependencies between objectives.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter, which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG3 task.

Return type:

MTOP

WFG4(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG4 problem.

WFG4 features a concave Pareto front with multi-modal transformations, testing an algorithm’s ability to handle multi-modality.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter, which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG4 task.

Return type:

MTOP

WFG5(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG5 problem.

WFG5 features a concave Pareto front with parameter-deceptive transformations, testing an algorithm’s ability to handle deceptive fitness landscapes.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter, which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG5 task.

Return type:

MTOP

WFG6(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG6 problem.

WFG6 features a concave Pareto front with non-separable reduction functions, testing an algorithm’s ability to handle non-separable problems.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter, which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG6 task.

Return type:

MTOP

WFG7(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG7 problem.

WFG7 features a concave Pareto front with parameter-dependent transformations, testing an algorithm’s ability to handle parameter dependencies.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter, which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG7 task.

Return type:

MTOP

WFG8(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG8 problem.

WFG8 features a concave Pareto front with parameter-dependent transformations on distance parameters, testing an algorithm’s ability to handle complex parameter dependencies.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter, which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG8 task.

Return type:

MTOP

WFG9(M=3, Kp=None, D=None) MTOP[source]

Generates the WFG9 problem.

WFG9 features a concave Pareto front with parameter-dependent transformations, deceptive and multi-modal shifts, and non-separable reduction functions, testing multiple characteristics simultaneously.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • Kp (int, optional) – Position parameter, which should be a multiple of M-1. If None, it is set to M-1 (default is None).

  • D (int, optional) – Number of decision variables. If None, it is set to Kp + 10 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the WFG9 task.

Return type:

MTOP

problem_information = {'n_cases': 9, 'n_cons': '0', 'n_dims': 'D', 'n_objs': 'M', 'n_tasks': '1', 'type': 'synthetic'}
class ddmtolab.Problems.STMO.UF.UF[source]

Implementation of the UF test suite for multi-objective optimization.

The UF test problems (UF1 to UF10) are unconstrained benchmark MOPs proposed by Zhang et al. (2009) for the CEC 2009 special session and competition.

Each method in this class generates a Multi-Task Optimization Problem (MTOP) instance containing a single UF task.

References

[1] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari. “Multiobjective optimization test instances for the CEC 2009 special session and competition.” School of CS & EE, University of Essex, Working Report CES-487, 2009.

Notes

UF1-UF7 have M=2 objectives, UF8-UF10 have M=3 objectives. The decision space dimension can be adjusted via the D parameter.
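As a concrete illustration of the suite's structure, the UF1 objectives can be evaluated directly from their published CEC 2009 definition. The sketch below is a standalone NumPy reimplementation for reference, not the library's internal code:

```python
import numpy as np

def uf1(x):
    """Evaluate the UF1 objectives from the CEC 2009 definition.

    x : 1-D array of length D with x[0] in [0, 1] and x[1:] in [-1, 1].
    """
    x = np.asarray(x, dtype=float)
    D = x.size
    j = np.arange(2, D + 1)                       # variable indices 2..D
    y = x[1:] - np.sin(6 * np.pi * x[0] + j * np.pi / D)
    J1 = (j % 2 == 1)                             # odd indices contribute to f1
    J2 = ~J1                                      # even indices contribute to f2
    f1 = x[0] + 2.0 * np.mean(y[J1] ** 2)         # 2/|J1| * sum over J1
    f2 = 1.0 - np.sqrt(x[0]) + 2.0 * np.mean(y[J2] ** 2)
    return np.array([f1, f2])

# On the Pareto set, x_j = sin(6*pi*x1 + j*pi/D), so f1 = x1 and f2 = 1 - sqrt(x1).
D = 30
x = np.empty(D)
x[0] = 0.25
x[1:] = np.sin(6 * np.pi * x[0] + np.arange(2, D + 1) * np.pi / D)
print(uf1(x))  # -> approximately [0.25, 0.5]
```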

UF1(D=30) MTOP[source]

Generates the UF1 problem.

UF1 is a bi-objective problem with a convex Pareto front.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF1 task.

Return type:

MTOP

UF10(D=30) MTOP[source]

Generates the UF10 problem.

UF10 is a three-objective problem with spherical Pareto front and complex landscape.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF10 task.

Return type:

MTOP

UF2(D=30) MTOP[source]

Generates the UF2 problem.

UF2 is a bi-objective problem with more complex variable linkage.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF2 task.

Return type:

MTOP

UF3(D=30) MTOP[source]

Generates the UF3 problem.

UF3 features a complex landscape with product terms.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF3 task.

Return type:

MTOP

UF4(D=30) MTOP[source]

Generates the UF4 problem.

UF4 features a sine-based variable transformation combined with an auxiliary h function in the objectives.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF4 task.

Return type:

MTOP

UF5(D=30) MTOP[source]

Generates the UF5 problem.

UF5 features a multimodal landscape with an oscillatory Pareto front.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF5 task.

Return type:

MTOP

UF6(D=30) MTOP[source]

Generates the UF6 problem.

UF6 features a disconnected Pareto front with product terms.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF6 task.

Return type:

MTOP

UF7(D=30) MTOP[source]

Generates the UF7 problem.

UF7 features a power transformation in the objectives.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF7 task.

Return type:

MTOP

UF8(D=30) MTOP[source]

Generates the UF8 problem.

UF8 is a three-objective problem with spherical Pareto front.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF8 task.

Return type:

MTOP

UF9(D=30) MTOP[source]

Generates the UF9 problem.

UF9 is a three-objective problem with disconnected Pareto front.

Parameters:

D (int, optional) – Number of decision variables (default is 30).

Returns:

A Multi-Task Optimization Problem instance containing the UF9 task.

Return type:

MTOP

problem_information = {'n_cases': 10, 'n_cons': '0', 'n_dims': 'D', 'n_objs': '[2, 3]', 'n_tasks': '1', 'type': 'synthetic'}
class ddmtolab.Problems.STMO.CF.CF[source]

Implementation of the CF test suite (CEC 2009) for constrained multi-objective optimization.

The CF test problems (CF1 to CF10) are standard constrained multi-objective optimization benchmarks from the CEC 2009 competition.

References

[1] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari. “Multiobjective optimization test instances for the CEC 2009 special session and competition.” University of Essex, Colchester, UK, Tech. Rep. CES-487, 2009.
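Unlike the UF problems, the CF problems attach inequality constraints to each task. A common convention for handling such constraints, sketched below in isolation (this is the generic feasibility-first rule, not ddmtolab's internal comparison logic), is to aggregate constraint values of the form g_i(x) <= 0 into a single violation measure:

```python
import numpy as np

def total_violation(cons):
    """Aggregate constraint values g_i(x) <= 0 into a single violation.

    cons : array of constraint values; positive entries are violations.
    """
    cons = np.asarray(cons, dtype=float)
    return np.maximum(cons, 0.0).sum()

def feasibility_first(obj_a, cv_a, obj_b, cv_b):
    """Return True if solution A is preferred over B under the common
    feasibility-first rule: smaller constraint violation always wins,
    and objectives only decide among equally feasible points."""
    if cv_a != cv_b:
        return cv_a < cv_b
    return obj_a < obj_b   # single-objective tie-break for illustration

print(total_violation([-0.2, 0.5, 0.25]))     # -> 0.75 (only positive entries count)
print(feasibility_first(5.0, 0.0, 1.0, 0.4))  # feasible A beats infeasible B -> True
```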

CF1(M=2, D=None) MTOP[source]

Generates the CF1 problem.

CF10(M=3, D=None) MTOP[source]

Generates the CF10 problem.

CF2(M=2, D=None) MTOP[source]

Generates the CF2 problem.

CF3(M=2, D=None) MTOP[source]

Generates the CF3 problem.

CF4(M=2, D=None) MTOP[source]

Generates the CF4 problem.

CF5(M=2, D=None) MTOP[source]

Generates the CF5 problem.

CF6(M=2, D=None) MTOP[source]

Generates the CF6 problem.

CF7(M=2, D=None) MTOP[source]

Generates the CF7 problem.

CF8(M=3, D=None) MTOP[source]

Generates the CF8 problem.

CF9(M=3, D=None) MTOP[source]

Generates the CF9 problem.

problem_information = {'n_cases': 10, 'n_cons': '1', 'n_dims': 'D', 'n_objs': '[2, 3]', 'n_tasks': '1', 'type': 'synthetic'}
class ddmtolab.Problems.STMO.MW.MW[source]

Implementation of the MW test suite for constrained multi-objective optimization.

The MW test problems are standard constrained multi-objective optimization benchmarks proposed by Ma and Wang (2019).

References

[1] Z. Ma and Y. Wang. “Evolutionary constrained multiobjective optimization: Test suite construction and performance comparisons.” IEEE Transactions on Evolutionary Computation, 2019, 23(6): 972-986.

MW1(M=2, D=None) MTOP[source]

Generates the MW1 problem.

MW1 features a linear Pareto front with a nonlinear constraint boundary that creates a challenging feasible region.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW1 task.

Return type:

MTOP

MW10(M=2, D=None) MTOP[source]

Generates the MW10 problem.

MW10 features a convex Pareto front (f2 = 1 - f1^2) with three constraints that create a complex feasible region with multiple disconnected segments.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW10 task.

Return type:

MTOP

MW11(M=2, D=None) MTOP[source]

Generates the MW11 problem.

MW11 features a quarter-circle Pareto front with four constraints that create a highly complex feasible region with multiple disconnected segments.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW11 task.

Return type:

MTOP

MW12(M=2, D=None) MTOP[source]

Generates the MW12 problem.

MW12 features a complex oscillating Pareto front with two constraints involving sinusoidal terms that create intricate feasible regions.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW12 task.

Return type:

MTOP

MW13(M=2, D=None) MTOP[source]

Generates the MW13 problem.

MW13 features a complex Pareto front involving exponential and sinusoidal terms with two constraints that create intricate feasible regions.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW13 task.

Return type:

MTOP

MW14(M=3, D=None) MTOP[source]

Generates the MW14 problem.

MW14 is a multi/many-objective constrained problem with a complex Pareto front involving exponential and sinusoidal terms, and a single constraint creating a disconnected feasible region.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW14 task.

Return type:

MTOP

MW2(M=2, D=None) MTOP[source]

Generates the MW2 problem.

MW2 features a linear Pareto front (f2 = 1 - f1) with a multi-modal g function and a nonlinear constraint boundary.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW2 task.

Return type:

MTOP

MW3(M=2, D=None) MTOP[source]

Generates the MW3 problem.

MW3 features a linear Pareto front (f2 = 1 - f1) with two constraints that create a complex feasible region.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW3 task.

Return type:

MTOP

MW4(M=3, D=None) MTOP[source]

Generates the MW4 problem.

MW4 is a multi/many-objective constrained problem with a simplex-shaped Pareto front and a nonlinear constraint.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW4 task.

Return type:

MTOP

MW5(M=2, D=None) MTOP[source]

Generates the MW5 problem.

MW5 features a quarter-circle Pareto front with three constraints that create a complex feasible region with disconnected segments.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW5 task.

Return type:

MTOP

MW6(M=2, D=None) MTOP[source]

Generates the MW6 problem.

MW6 features an elliptical Pareto front with a complex constraint based on angular position.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW6 task.

Return type:

MTOP

MW7(M=2, D=None) MTOP[source]

Generates the MW7 problem.

MW7 features a quarter-circle Pareto front with two constraints that create a complex angular-dependent feasible region.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW7 task.

Return type:

MTOP

MW8(M=3, D=None) MTOP[source]

Generates the MW8 problem.

MW8 is a multi/many-objective constrained problem with a normalized spherical Pareto front and a constraint based on the angular position of the last objective.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW8 task.

Return type:

MTOP

MW9(M=2, D=None) MTOP[source]

Generates the MW9 problem.

MW9 features a concave Pareto front (f2 = 1 - f1^0.6) with three constraints that create a complex feasible region with multiple disconnected segments.

Parameters:
  • M (int, optional) – Number of objectives (default is 2).

  • D (int, optional) – Number of decision variables. If None, it is set to 15 (default is None).

Returns:

A Multi-Task Optimization Problem instance containing the MW9 task.

Return type:

MTOP

problem_information = {'n_cases': 14, 'n_cons': '[1, 3]', 'n_dims': 'D', 'n_objs': '[2, 3]', 'n_tasks': '1', 'type': 'synthetic'}

MTSO Problems:

class ddmtolab.Problems.MTSO.cec17_mtso.CEC17MTSO[source]

Implementation of the CEC 2017 Competition on Evolutionary Multi-Task Optimization (EMTO) benchmark problems P1 to P9.

These problems are two-task optimization scenarios designed to test the ability of algorithms to leverage knowledge transfer under various relationships between tasks (similarity of global optima and search spaces).

Notes

Fixed parameters by benchmark definition:

  • D=50 (decision variables)

  • K=2 (number of tasks)

data_dir

The directory path for problem data files.

Type:

str

P1() MTOP[source]

Generates Problem 1: CI-HS (Complete Intersection - High Similarity).

  • Task 1: Rotated and shifted Griewank (Dim 50, [-100, 100])

  • Task 2: Rotated and shifted Rastrigin (Dim 50, [-50, 50])

  • Characteristic: Complete Overlap of the global optima and High Similarity of the solution space structures (Griewank, Rastrigin are both multi-modal).

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP
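The "rotated and shifted" construction used throughout this suite follows the standard form \(f(x) = f_{\text{base}}(M(x - s))\) for a rotation matrix M and shift vector s. The sketch below illustrates this recipe with the Rastrigin base function, an identity rotation, and an arbitrary shift; the actual M and s are loaded from the benchmark's data files:

```python
import numpy as np

def rastrigin(z):
    """Standard Rastrigin function: global minimum 0 at z = 0."""
    z = np.asarray(z, dtype=float)
    return 10.0 * z.size + np.sum(z ** 2 - 10.0 * np.cos(2.0 * np.pi * z))

def make_task(rotation, shift):
    """Build f(x) = rastrigin(M @ (x - s)), the usual rotate-and-shift recipe."""
    def task(x):
        return rastrigin(rotation @ (np.asarray(x, dtype=float) - shift))
    return task

D = 50
shift = np.full(D, 2.0)            # illustrative shift, not the benchmark's data
task = make_task(np.eye(D), shift)
print(task(shift))                 # the relocated optimum still evaluates to 0.0
```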

P2() MTOP[source]

Generates Problem 2: CI-MS (Complete Intersection - Medium Similarity).

  • Task 1: Rotated and shifted Ackley (Dim 50, [-50, 50])

  • Task 2: Rotated and shifted Rastrigin (Dim 50, [-50, 50])

  • Characteristic: Complete Overlap of the global optima and Medium Similarity of the solution space structures (Ackley is generally smoother than Rastrigin).

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP

P3() MTOP[source]

Generates Problem 3: CI-LS (Complete Intersection - Low Similarity).

  • Task 1: Rotated and shifted Ackley (Dim 50, [-50, 50])

  • Task 2: Standard Schwefel (Dim 50, [-500, 500])

  • Characteristic: Complete Overlap of the global optima and Low Similarity of the solution space structures (Schwefel is very difficult, Ackley is moderate).

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP

P4() MTOP[source]

Generates Problem 4: PI-HS (Partial Intersection - High Similarity).

  • Task 1: Rotated and shifted Rastrigin (Dim 50, [-50, 50])

  • Task 2: Shifted Sphere (Dim 50, [-100, 100])

  • Characteristic: Partial Overlap of the global optima and High Similarity of the solution space structures (Rastrigin is multi-modal, Sphere is uni-modal).

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP

P5() MTOP[source]

Generates Problem 5: PI-MS (Partial Intersection - Medium Similarity).

  • Task 1: Rotated and shifted Ackley (Dim 50, [-50, 50])

  • Task 2: Standard Rosenbrock (Dim 50, [-50, 50])

  • Characteristic: Partial Overlap of the global optima and Medium Similarity of the solution space structures (Ackley is multi-modal, Rosenbrock is uni-modal and valley-shaped).

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP

P6() MTOP[source]

Generates Problem 6: PI-LS (Partial Intersection - Low Similarity).

  • Task 1: Rotated and shifted Ackley (Dim 50, [-50, 50])

  • Task 2: Rotated and shifted Weierstrass (Dim 25, [-0.5, 0.5])

  • Characteristic: Partial Overlap of the global optima, Unequal Dimensions (50 vs 25), and Low Similarity (Weierstrass is highly complex and non-differentiable).

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP

P7() MTOP[source]

Generates Problem 7: NI-HS (No Intersection - High Similarity).

  • Task 1: Standard Rosenbrock (Dim 50, [-50, 50])

  • Task 2: Rotated and shifted Rastrigin (Dim 50, [-50, 50])

  • Characteristic: No Overlap of the global optima and High Similarity of the solution space structures.

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP

P8() MTOP[source]

Generates Problem 8: NI-MS (No Intersection - Medium Similarity).

  • Task 1: Rotated and shifted Griewank (Dim 50, [-100, 100])

  • Task 2: Rotated and shifted Weierstrass (Dim 50, [-0.5, 0.5])

  • Characteristic: No Overlap of the global optima and Medium Similarity of the solution space structures.

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP

P9() MTOP[source]

Generates Problem 9: NI-LS (No Intersection - Low Similarity).

  • Task 1: Rotated and shifted Rastrigin (Dim 50, [-50, 50])

  • Task 2: Standard Schwefel (Dim 50, [-500, 500])

  • Characteristic: No Overlap of the global optima and Low Similarity of the solution space structures (Rastrigin is multi-modal, Schwefel is highly complex/difficult).

Returns:

A Multi-Task Optimization Problem instance containing Task 1 and Task 2.

Return type:

MTOP

problem_information = {'n_cases': 9, 'n_cons': '0', 'n_dims': '50', 'n_objs': '1', 'n_tasks': '2', 'type': 'synthetic'}
class ddmtolab.Problems.MTSO.cec17_mtso_10d.CEC17MTSO_10D[source]

Implementation of the 10-Dimensional (10D) versions of the CEC 2017 Multi-Task Optimization (MTSO) benchmark problems P1 to P9.

These problems maintain the same underlying functions and global optima relationships as the original 50D CEC17 MTSO set but are configured with a reduced search space dimensionality (D=10) for both tasks.

Notes

Fixed parameters by benchmark definition:

  • D=10 (decision variables)

  • K=2 (number of tasks)

data_dir

The directory path for 10D problem data files.

Type:

str

P1() MTOP[source]

Generates Problem 1 (10D): T1: Griewank, T2: Rastrigin.

  • T1: Griewank (Dim 10, [-100, 100]) - Standard

  • T2: Rastrigin (Dim 10, [-50, 50]) - Standard

  • Relationship: Global optima are at origin (0) for both tasks (Complete Intersection).

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

P2() MTOP[source]

Generates Problem 2 (10D): T1: Rosenbrock, T2: Rastrigin.

  • T1: Rosenbrock (Dim 10, [-50, 50]) - Shifted to (1, …, 1)

  • T2: Rastrigin (Dim 10, [-50, 50]) - Shifted to (1, …, 1)

  • Relationship: Global optima are at (1, …, 1) for both tasks (Complete Intersection).

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

P3() MTOP[source]

Generates Problem 3 (10D): T1: Griewank, T2: Weierstrass.

  • T1: Griewank (Dim 10, [-100, 100]) - Shifted to (10, …, 10)

  • T2: Weierstrass (Dim 10, [-0.5, 0.5]) - Shifted to (1, …, 1)

  • Relationship: Global optima are misaligned (No Intersection).

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

P4() MTOP[source]

Generates Problem 4 (10D): T1: Ackley, T2: Rosenbrock.

  • T1: Ackley (Dim 10, [-50, 50]) - Rotated and Shifted (Data loaded from P4.mat)

  • T2: Rosenbrock (Dim 10, [-50, 50]) - Standard (optimum at origin \(\mathbf{0}\))

  • Relationship: Partial Intersection of global optima due to rotation/shift in T1.

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

P5() MTOP[source]

Generates Problem 5 (10D): T1: Rastrigin, T2: Sphere.

  • T1: Rastrigin (Dim 10, [-50, 50]) - Rotated and Shifted (Data loaded from P5.mat)

  • T2: Sphere (Dim 10, [-100, 100]) - Shifted (Data loaded from P5.mat)

  • Relationship: Partial Intersection of global optima.

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

P6() MTOP[source]

Generates Problem 6 (10D): T1: Ackley, T2: Rastrigin.

  • T1: Ackley (Dim 10, [-50, 50]) - Rotated and Shifted (Data loaded from P6.mat)

  • T2: Rastrigin (Dim 10, [-50, 50]) - Rotated and Shifted (Data loaded from P6.mat)

  • Relationship: Complete Intersection of global optima, but different rotations (Data files define this).

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

P7() MTOP[source]

Generates Problem 7 (10D): T1: Ackley, T2: Schwefel.

  • T1: Ackley (Dim 10, [-50, 50]) - Rotated and Shifted (Data loaded from P7.mat)

  • T2: Schwefel (Dim 10, [-500, 500]) - Standard (optimum at origin \(\mathbf{0}\))

  • Relationship: Partial Intersection of global optima.

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

P8() MTOP[source]

Generates Problem 8 (10D): T1: Ackley, T2: Weierstrass.

  • T1: Ackley (Dim 10, [-50, 50]) - Rotated and Shifted (Data loaded from P8.mat)

  • T2: Weierstrass (Dim 10, [-0.5, 0.5]) - Rotated (Data loaded from P8.mat)

  • Relationship: Partial Intersection of global optima.

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

P9() MTOP[source]

Generates Problem 9 (10D): T1: Rastrigin, T2: Schwefel.

  • T1: Rastrigin (Dim 10, [-50, 50]) - Rotated and Shifted (Data loaded from P9.mat)

  • T2: Schwefel (Dim 10, [-500, 500]) - Standard (optimum at origin \(\mathbf{0}\))

  • Relationship: Partial Intersection of global optima.

Returns:

A Multi-Task Optimization Problem instance containing the two tasks.

Return type:

MTOP

problem_information = {'n_cases': 9, 'n_cons': '0', 'n_dims': '10', 'n_objs': '1', 'n_tasks': '2', 'type': 'synthetic'}
class ddmtolab.Problems.MTSO.cec19_matso.CEC19MaTSO[source]

Implementation of the CEC 2019 Competition on Massive Multi-Task Optimization (MaTSO) benchmark problems P1 to P6.

These problems are designed to challenge algorithms with a large number of optimization tasks (typically 100 or more) derived from the same underlying function, but with different rotations and shifts, thereby testing transfer learning across many similar but distinct tasks.
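The many-task setup described above, with K tasks derived from one base function under different per-task shifts (and, in the benchmark itself, per-task rotations), can be sketched as follows. This is a self-contained illustration using a Sphere base and random shifts, not the library's data-file loader:

```python
import numpy as np

def sphere(z):
    """Sphere function: global minimum 0 at z = 0."""
    return float(np.sum(np.asarray(z, dtype=float) ** 2))

def build_tasks(K, D, rng):
    """Create K tasks sharing one base function but with distinct shifts.
    The real benchmark additionally applies per-task rotation matrices."""
    shifts = rng.uniform(-10.0, 10.0, size=(K, D))
    tasks = [lambda x, s=s: sphere(x - s) for s in shifts]
    return tasks, shifts

rng = np.random.default_rng(0)
tasks, shifts = build_tasks(K=10, D=50, rng=rng)
# Each task attains its minimum (0) at its own shift vector:
print(all(abs(t(s)) < 1e-12 for t, s in zip(tasks, shifts)))  # -> True
```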

dim

The dimensionality of the search space for all tasks (fixed at 50).

Type:

int

data_dir

The directory path for problem data files.

Type:

str

P1(K=10) MTOP[source]

Generates Problem 1 (MaTSO): Rosenbrock tasks.

Each task is a 50D Rosenbrock function, rotated and shifted.

Parameters:

K (int, optional) – Number of tasks to create (default: 10).

Notes

Fixed parameters by benchmark definition: D=50

  • Function: Rosenbrock

  • Dimensions: 50D

  • Bounds: [-50, 50]

Returns:

A Multi-Task Optimization Problem instance containing K tasks.

Return type:

MTOP

P2(K=10) MTOP[source]

Generates Problem 2 (MaTSO): Ackley tasks.

Each task is a 50D Ackley function, rotated and shifted.

Parameters:

K (int, optional) – Number of tasks to create (default: 10).

Notes

Fixed parameters by benchmark definition: D=50

  • Function: Ackley

  • Dimensions: 50D

  • Bounds: [-50, 50]

Returns:

A Multi-Task Optimization Problem instance containing K tasks.

Return type:

MTOP

P3(K=10) MTOP[source]

Generates Problem 3 (MaTSO): Rastrigin tasks.

Each task is a 50D Rastrigin function, rotated and shifted.

Parameters:

K (int, optional) – Number of tasks to create (default: 10).

Notes

Fixed parameters by benchmark definition: D=50

  • Function: Rastrigin

  • Dimensions: 50D

  • Bounds: [-50, 50]

Returns:

A Multi-Task Optimization Problem instance containing K tasks.

Return type:

MTOP

P4(K=10) MTOP[source]

Generates Problem 4 (MaTSO): Griewank tasks.

Each task is a 50D Griewank function, rotated and shifted.

Parameters:

K (int, optional) – Number of tasks to create (default: 10).

Notes

Fixed parameters by benchmark definition: D=50

  • Function: Griewank

  • Dimensions: 50D

  • Bounds: [-100, 100]

Returns:

A Multi-Task Optimization Problem instance containing K tasks.

Return type:

MTOP

P5(K=10) MTOP[source]

Generates Problem 5 (MaTSO): Weierstrass tasks.

Each task is a 50D Weierstrass function, rotated and shifted.

Parameters:

K (int, optional) – Number of tasks to create (default: 10).

Notes

Fixed parameters by benchmark definition: D=50

  • Function: Weierstrass

  • Dimensions: 50D

  • Bounds: [-0.5, 0.5]

Returns:

A Multi-Task Optimization Problem instance containing K tasks.

Return type:

MTOP

P6(K=10) MTOP[source]

Generates Problem 6 (MaTSO): Schwefel tasks.

Each task is a 50D Schwefel function, rotated and shifted.

Parameters:

K (int, optional) – Number of tasks to create (default: 10).

Notes

Fixed parameters by benchmark definition: D=50

  • Function: Schwefel

  • Dimensions: 50D

  • Bounds: [-500, 500]

Returns:

A Multi-Task Optimization Problem instance containing K tasks.

Return type:

MTOP

problem_information = {'n_cases': 6, 'n_cons': '0', 'n_dims': '50', 'n_objs': '1', 'n_tasks': 'K', 'type': 'synthetic'}
class ddmtolab.Problems.MTSO.cmt.CMT[source]

CMT (Constrained Multi-Task) benchmark problems.

This class provides constrained multi-task optimization problems with various function combinations and constraint types.

CMT1(D: int = 50) MTOP[source]

CMT Problem 1: Griewank (Type 1 constraint) + Rastrigin (Type 1 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

CMT2(D: int = 50) MTOP[source]

CMT Problem 2: Ackley (Type 2 constraint) + Rastrigin (Type 2 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

CMT3(D: int = 50) MTOP[source]

CMT Problem 3: Ackley (Type 2 constraint) + Schwefel (Type 1 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

CMT4(D: int = 50) MTOP[source]

CMT Problem 4: Rastrigin (Type 1 constraint) + Sphere (Type 1 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

CMT5(D: int = 50) MTOP[source]

CMT Problem 5: Ackley (Type 1 constraint) + Rosenbrock (Type 2 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

CMT6(D: int = 50) MTOP[source]

CMT Problem 6: Ackley (Type 2 constraint) + Weierstrass (Type 3 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

CMT7(D: int = 50) MTOP[source]

CMT Problem 7: Rosenbrock (Type 1 constraint) + Rastrigin (Type 1 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

CMT8(D: int = 50) MTOP[source]

CMT Problem 8: Griewank (Type 2 constraint) + Weierstrass (Type 3 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

CMT9(D: int = 50) MTOP[source]

CMT Problem 9: Rastrigin (Type 4 constraint) + Schwefel (Type 2 constraint).

Parameters:

D (int, optional) – Number of decision variables (default is 50).

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

problem_information = {'n_cases': 9, 'n_cons': '1', 'n_dims': 'D', 'n_objs': '1', 'n_tasks': '2', 'type': 'synthetic'}
class ddmtolab.Problems.MTSO.stop.STOP[source]

Implementation of STOP (Scalable Test Problem Generator for Sequential Transfer Optimization) benchmark problems.

These problems are designed to challenge algorithms with many tasks derived from various benchmark functions with different characteristics, testing transfer learning across many similar but distinct tasks.

Reference: X. Xue et al., “A Scalable Test Problem Generator for Sequential Transfer Optimization,” IEEE Trans. Cybern., vol. 55, no. 5, pp. 2110-2123, 2025.

data_dir

The directory path for problem data files.

Type:

str

STOP1(K=10) MTOP[source]

STOP Problem 1: Sphere-Ta-hh2-d50-k49

STOP10(K=10) MTOP[source]

STOP Problem 10: Rastrigin-Te-hl2-d30-k49

STOP11(K=10) MTOP[source]

STOP Problem 11: Ackley-Ta-hl2-d50-k49

STOP12(K=10) MTOP[source]

STOP Problem 12: Ellipsoid-Te-hl1-d50-k49

STOP2(K=10) MTOP[source]

STOP Problem 2: Ellipsoid-Te-hh2-d25-k49

STOP3(K=10) MTOP[source]

STOP Problem 3: Schwefel-Ta-hh2-d30-k49

STOP4(K=10) MTOP[source]

STOP Problem 4: Quartic-Te-hh2-d50-k49

STOP5(K=10) MTOP[source]

STOP Problem 5: Ackley-Ta-hm1-d25-k49

STOP6(K=10) MTOP[source]

STOP Problem 6: Rastrigin-Te-hm2-d50-k49

STOP7(K=10) MTOP[source]

STOP Problem 7: Griewank-Ta-hm3-d25-k49

STOP8(K=10) MTOP[source]

STOP Problem 8: Levy-Te-hm4-d30-k49

STOP9(K=10) MTOP[source]

STOP Problem 9: Sphere-Ta-hl1-d25-k49

static S_Ellipsoid(var, opt)[source]

Ellipsoid function (STOP variant, shifted only).

static S_Levy(var, opt)[source]

Levy function (STOP variant, shifted only).

static S_Quartic(var, opt)[source]

Quartic function with noise (STOP variant, shifted only).

static S_Schwefel(var, opt)[source]

Schwefel 2.2 function (STOP variant, shifted only).
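The S_* helpers above share the signature (var, opt) and evaluate a base function on the shifted input var - opt. A minimal sketch of that pattern is below, assuming the "Schwefel 2.2" entry refers to the common Schwefel 2.22 form (sum of absolute values plus their product); the exact base-function variants follow the STOP generator's definitions:

```python
import numpy as np

def s_schwefel222(var, opt):
    """Schwefel 2.22 evaluated on the shifted input z = var - opt:
    f(z) = sum(|z_i|) + prod(|z_i|), minimum 0 at var == opt."""
    z = np.abs(np.asarray(var, dtype=float) - np.asarray(opt, dtype=float))
    return float(np.sum(z) + np.prod(z))

opt = np.linspace(-1.0, 1.0, 25)       # illustrative task optimum
print(s_schwefel222(opt, opt))         # -> 0.0 at the shifted optimum
```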

problem_information = {'n_cases': 12, 'n_cons': '0', 'n_dims': '[25, 50]', 'n_objs': '1', 'n_tasks': 'K', 'type': 'synthetic'}

MTMO Problems:

class ddmtolab.Problems.MTMO.cec17_mtmo.CEC17MTMO[source]

Implementation of the CEC 2017 Competition on Evolutionary Multi-Task Multi-Objective Optimization (MTMO) benchmark problems P1 to P9.

These problems consist of two multi-objective optimization tasks (MO-tasks) with shared variables, designed to test knowledge transfer in the presence of multiple conflicting objectives. All tasks are minimization problems.

Notes

Fixed parameters by benchmark definition:

  • K=2 (number of tasks)

  • D and M vary by problem (see individual method docstrings)

data_dir

The directory path for problem data files.

Type:

str

P1() MTOP[source]

Generates Problem 1: T1 (ZDT3-like) vs T2 (ZDT2-like).

Both tasks are 2-objective, 50-dimensional.

  • T1: Modified ZDT3-like, PF is discontinuous, non-convex (Curved, Piecewise).

  • T2: Modified ZDT2-like, PF is continuous, non-convex.

  • Relationship: Decision space overlap exists only in \(x_1\) dimension (\([0, 1]\)).

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP
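The ZDT-style structure behind these tasks, with \(f_1\) driven by \(x_1\) and \(f_2 = g \cdot h(f_1, g)\), can be sketched with the classic (unmodified) ZDT2 form. This is a reference sketch of standard ZDT2, not the modified variants used in P1:

```python
import numpy as np

def zdt2(x):
    """Classic ZDT2: continuous, non-convex Pareto front f2 = 1 - f1**2 at g = 1."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])       # 9/(D-1) * sum(x[1:])
    f2 = g * (1.0 - (f1 / g) ** 2)       # h(f1, g) = 1 - (f1/g)**2
    return np.array([f1, f2])

x = np.zeros(50)
x[0] = 0.6                               # on the Pareto set: all other variables 0
print(zdt2(x))                           # -> approximately [0.6, 0.64]
```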

P2() MTOP[source]

Generates Problem 2: T1 (ZDT2-like, Rosenbrock) vs T2 (ZDT3-like, Rotated).

Both tasks are 2-objective, 10-dimensional.

  • T1: Modified ZDT2-like with Rosenbrock-like component.

  • T2: Modified ZDT3-like with rotated non-linear component (Mcm2).

  • Relationship: Decision variables are coupled and search spaces are different.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P3() MTOP[source]

Generates Problem 3: T1 (ZDT3-like, Rastrigin) vs T2 (ZDT1-like, Ackley).

Both tasks are 2-objective, 50-dimensional.

  • T1: Modified ZDT3-like with Rastrigin-like component in \(g\). PF is discontinuous, non-convex.

  • T2: Modified ZDT1-like with Ackley-like component in \(g\). PF is continuous, convex.

  • Relationship: Tasks have different search spaces and different \(g\)-functions.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P4() MTOP[source]

Generates Problem 4: T1 (ZDT1-like, Sphere) vs T2 (ZDT1-like, Rastrigin).

Both tasks are 2-objective, 50-dimensional.

  • T1: Modified ZDT1-like with Sphere component in \(g\). PF is continuous, convex.

  • T2: Modified ZDT1-like with shifted Rastrigin component (Sph2) in \(g\). PF is continuous, convex.

  • Relationship: The \(g\)-functions and search spaces are different.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P5() MTOP[source]

Generates Problem 5: T1 (ZDT3-like, Rotated Sphere) vs T2 (ZDT2-like, Rotated Rastrigin).

Both tasks are 2-objective, 50-dimensional.

  • T1: Modified ZDT3-like with rotated and shifted Sphere component in \(g\) (Mpm1, Spm1). PF is discontinuous, non-convex.

  • T2: Modified ZDT2-like with rotated Rastrigin component in \(g\) (Mpm2). PF is continuous, non-convex.

  • Relationship: Different problem landscapes and global optimum locations in the search space.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P6() MTOP[source]

Generates Problem 6: T1 (ZDT3-like, Griewank) vs T2 (ZDT3-like, Ackley).

Both tasks are 2-objective, 50-dimensional.

  • T1: Modified ZDT3-like with Griewank component in \(g\). PF is discontinuous, non-convex.

  • T2: Modified ZDT3-like with shifted Ackley component (Spl2) in \(g\). PF is discontinuous, non-convex.

  • Relationship: PF shapes are similar, but \(g\)-functions and search spaces are different.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P7() MTOP[source]

Generates Problem 7: T1 (ZDT3-like, Rosenbrock) vs T2 (ZDT1-like, Sphere).

Both tasks are 2-objective, 50-dimensional.

  • T1: Modified ZDT3-like with Rosenbrock component in \(g\). PF is discontinuous, non-convex.

  • T2: Modified ZDT1-like with Sphere component in \(g\). PF is continuous, convex.

  • Relationship: Highly multi-modal \(g\)-function in T1 and different PF shapes.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P8() MTOP[source]

Generates Problem 8: T1 (DTLZ1-like, 3-obj) vs T2 (ZDT2-like, 2-obj).

T1 is 3-objective, T2 is 2-objective. Both are 20-dimensional.

  • T1: Modified DTLZ1-like with Rosenbrock component in \(g\). PF is a plane (linear).

  • T2: Modified ZDT2-like with rotated Sphere component in \(g\) (Mnm2). PF is continuous, non-convex.

  • Relationship: Tasks have different number of objectives and different PF shapes/geometries.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P9() MTOP[source]

Generates Problem 9: T1 (DTLZ1-like, 3-obj) vs T2 (ZDT2-like, 2-obj).

T1 is 3-objective (25-dimensional), T2 is 2-objective (50-dimensional).

  • T1: Modified DTLZ1-like with shifted Griewank component (Snl1) in \(g\). PF is a plane (linear).

  • T2: Modified ZDT2-like with Ackley component in \(g\). PF is continuous, non-convex.

  • Relationship: Different objectives, different dimensions, and different PF shapes.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

problem_information = {'n_cases': 9, 'n_cons': '0', 'n_dims': '[10, 50]', 'n_objs': '[2, 3]', 'n_tasks': '2', 'type': 'synthetic'}
class ddmtolab.Problems.MTMO.cec19_mtmo.CEC19MTMO[source]

Implementation of the CEC 2019 Competition on Evolutionary Multi-Task Multi-Objective Optimization (MTMO) benchmark problems.

These problems are based on the LZ09 test suite and consist of multiple tasks with different configurations designed to test knowledge transfer in multi-objective optimization scenarios.

Notes

Fixed parameters by benchmark definition:

  • K=2 (number of tasks)

  • D and M vary by problem (see individual method docstrings)

P1() MTOP[source]

Generates Problem P1 (CPLX1): T1 (LZ09_F1) vs T2 (LZ09_F2).

Both tasks are 2-objective, 10-dimensional.

  • T1: LZ09_F1 with ptype=21, dtype=1, ltype=21

  • T2: LZ09_F2 with ptype=21, dtype=1, ltype=22

  • Relationship: Same PF shape (ptype=21), different link functions

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P10() MTOP[source]

Generates Problem P10 (CPLX10): T1 (LZ09_F7) vs T2 (LZ09_F8).

Both tasks are 2-objective, 10-dimensional.

  • T1: LZ09_F7 with ptype=21, dtype=3, ltype=21

  • T2: LZ09_F8 with ptype=21, dtype=4, ltype=21

  • Relationship: Same PF shape (ptype=21), different distance functions and search spaces

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P2() MTOP[source]

Generates Problem P2 (CPLX2): T1 (LZ09_F1) vs T2 (LZ09_F7).

Both tasks are 2-objective, 10-dimensional.

  • T1: LZ09_F1 with ptype=21, dtype=1, ltype=21

  • T2: LZ09_F7 with ptype=21, dtype=3, ltype=21

  • Relationship: Same PF shape (ptype=21), different distance functions

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P3() MTOP[source]

Generates Problem P3 (CPLX3): T1 (LZ09_F2) vs T2 (LZ09_F4).

Both tasks are 2-objective, 30-dimensional.

  • T1: LZ09_F2 with ptype=21, dtype=1, ltype=22

  • T2: LZ09_F4 with ptype=21, dtype=1, ltype=24

  • Relationship: Same PF shape (ptype=21), different link functions and search spaces

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P4() MTOP[source]

Generates Problem P4 (CPLX4): T1 (LZ09_F2) vs T2 (LZ09_F9).

Both tasks are 2-objective, 30-dimensional.

  • T1: LZ09_F2 with ptype=21, dtype=1, ltype=22

  • T2: LZ09_F9 with ptype=22, dtype=1, ltype=22

  • Relationship: Different PF shapes (ptype=21 vs ptype=22), same link function

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P5() MTOP[source]

Generates Problem P5 (CPLX5): T1 (LZ09_F3, 2-obj) vs T2 (LZ09_F6, 3-obj).

Tasks have different objectives and dimensions.

  • T1: LZ09_F3, 2-objective, 30-dimensional, ptype=21, dtype=1, ltype=23

  • T2: LZ09_F6, 3-objective, 10-dimensional, ptype=31, dtype=1, ltype=32

  • Relationship: Different number of objectives and dimensions

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P6() MTOP[source]

Generates Problem P6 (CPLX6): T1 (LZ09_F3) vs T2 (LZ09_F9).

Both tasks are 2-objective, 30-dimensional.

  • T1: LZ09_F3 with ptype=21, dtype=1, ltype=23

  • T2: LZ09_F9 with ptype=22, dtype=1, ltype=22

  • Relationship: Different PF shapes and search spaces

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P7() MTOP[source]

Generates Problem P7 (CPLX7): T1 (LZ09_F4) vs T2 (LZ09_F5).

Both tasks are 2-objective, 30-dimensional.

  • T1: LZ09_F4 with ptype=21, dtype=1, ltype=24

  • T2: LZ09_F5 with ptype=21, dtype=1, ltype=26

  • Relationship: Same PF shape (ptype=21), different link functions

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P8() MTOP[source]

Generates Problem P8 (CPLX8): T1 (LZ09_F5) vs T2 (LZ09_F7).

Both tasks are 2-objective with different dimensions.

  • T1: LZ09_F5, 30-dimensional, ptype=21, dtype=1, ltype=26

  • T2: LZ09_F7, 10-dimensional, ptype=21, dtype=3, ltype=21

  • Relationship: Same PF shape (ptype=21), different dimensions and link functions

  • Note: x1 and x2 both in [0,1] for both tasks

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P9() MTOP[source]

Generates Problem P9 (CPLX9): T1 (LZ09_F6, 3-obj) vs T2 (LZ09_F9, 2-obj).

Tasks have different objectives and dimensions.

  • T1: LZ09_F6, 3-objective, 10-dimensional, ptype=31, dtype=1, ltype=32

  • T2: LZ09_F9, 2-objective, 30-dimensional, ptype=22, dtype=1, ltype=22

  • Relationship: Different number of objectives, dimensions, and PF shapes

  • Note: x1 and x2 both in [0,1] for both tasks

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

problem_information = {'n_cases': 10, 'n_cons': '0', 'n_dims': '50', 'n_objs': '2', 'n_tasks': '2', 'type': 'synthetic'}
class ddmtolab.Problems.MTMO.cec19_matmo.CEC19_MaTMO[source]

Implementation of CEC19 MaTMO (Many-Task Multi-Objective) benchmark problems P1-P6.

Note: P1, P4, and P6 use the DTLZ formulation; P2, P3, and P5 use the ZDT formulation.

Notes

Fixed parameters by benchmark definition:

  • D=50 (decision variables)

  • M=2 (number of objectives)

  • K is configurable (default is 10)

data_dir

The directory path for problem data files.

Type:

str

P1(K=10) MTOP[source]

P1: Sphere + Circular PF (DTLZ formulation)

  • Dimension: 50

  • g-function: Sphere

  • PF: Circular (DTLZ-style)

P2(K=10) MTOP[source]

P2: Mean + Concave PF (ZDT formulation)

  • Dimension: 50

  • g-function: Mean

  • PF: Concave

P3(K=10) MTOP[source]

P3: Rosenbrock + Concave PF (ZDT formulation)

  • Dimension: 10

  • g-function: Rosenbrock

  • PF: Concave

P4(K=10) MTOP[source]

P4: Rastrigin + Circular PF (DTLZ formulation)

  • Dimension: 50

  • g-function: Rastrigin

  • PF: Circular (DTLZ-style)

P5(K=10) MTOP[source]

P5: Ackley + Convex PF (ZDT formulation)

  • Dimension: 50

  • g-function: Ackley

  • PF: Convex

P6(K=10) MTOP[source]

P6: Griewank + Circular PF (DTLZ formulation)

  • Dimension: 50

  • g-function: Griewank

  • PF: Circular (DTLZ-style)

problem_information = {'n_cases': 6, 'n_cons': '0', 'n_dims': '50', 'n_objs': '2', 'n_tasks': 'K', 'type': 'synthetic'}
class ddmtolab.Problems.MTMO.cec21_mtmo.CEC21MTMO[source]

Implementation of the CEC 2021 Competition on Evolutionary Multi-Task Multi-Objective Optimization (MTMO) benchmark problems.

These problems consist of two multi-objective optimization tasks (MO-tasks) with shared variables, designed to test knowledge transfer in the presence of multiple conflicting objectives. All tasks are minimization problems.

Notes

Fixed parameters by benchmark definition:

  • K=2 (number of tasks)

  • D=50, M=2 for all problems

data_dir

The directory path for problem data files.

Type:

str

funcs

Instance of the functions helper class.

Type:

CEC21MTMOFunctions

P1() MTOP[source]

Generates Problem 1: T1 (MMDTLZ, F17) vs T2 (MMZDT, F17).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMDTLZ-type with hybrid function F17 (Modified Schwefel + Rastrigin + Elliptic).

  • T2: MMZDT-type with linear f1, hybrid function F17, and concave h.

  • Relationship: Different task types but same g-function complexity.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P10() MTOP[source]

Generates Problem 10: T1 (MMDTLZ, F15) vs T2 (MMZDT, F17).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMDTLZ-type with hybrid function F15 (ExGriewRosen).

  • T2: MMZDT-type with linear f1, hybrid function F17 (Modified Schwefel + Rastrigin + Elliptic), and concave h.

  • Relationship: Different task types and different g-functions.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P2() MTOP[source]

Generates Problem 2: T1 (MMDTLZ, F19) vs T2 (MMDTLZ, F19).

P3() MTOP[source]

Generates Problem 3: T1 (MMDTLZ, F22) vs T2 (MMZDT, F22).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMDTLZ-type with hybrid function F22 (Katsuura + HappyCat + ExGriewRosen + Modified Schwefel + Ackley).

  • T2: MMZDT-type with linear f1, hybrid function F22, and convex h.

  • Relationship: Different task types but same g-function complexity.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P4() MTOP[source]

Generates Problem 4: T1 (MMZDT, F15) vs T2 (MMZDT, F15).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMZDT-type with linear f1, hybrid function F15 (ExGriewRosen), and convex h.

  • T2: MMZDT-type with linear f1, hybrid function F15 (ExGriewRosen), and convex h.

  • Relationship: Same task type and g-function, both convex.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P5() MTOP[source]

Generates Problem 5: T1 (MMDTLZ, F4) vs T2 (MMZDT, F4).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMDTLZ-type with hybrid function F4 (Rosenbrock).

  • T2: MMZDT-type with linear f1, hybrid function F4 (Rosenbrock), and concave h.

  • Relationship: Different task types but same g-function (Rosenbrock).

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P6() MTOP[source]

Generates Problem 6: T1 (MMDTLZ, F9) vs T2 (MMDTLZ, F9).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMDTLZ-type with hybrid function F9 (Rastrigin, rotated).

  • T2: MMDTLZ-type with hybrid function F9 (Rastrigin, rotated).

  • Relationship: Same task type and g-function.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P7() MTOP[source]

Generates Problem 7: T1 (MMDTLZ, F8) vs T2 (MMZDT, F8).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMDTLZ-type with hybrid function F8 (Rastrigin, non-rotated).

  • T2: MMZDT-type with linear f1, hybrid function F8 (Rastrigin, non-rotated), and convex h.

  • Relationship: Different task types but same g-function.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P8() MTOP[source]

Generates Problem 8: T1 (MMDTLZ, F18) vs T2 (MMZDT, F20).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMDTLZ-type with hybrid function F18 (Cigar + HGBat + Rastrigin).

  • T2: MMZDT-type with linear f1, hybrid function F20 (HGBat + Discus + ExGriewRosen + Rastrigin), and concave h.

  • Relationship: Different task types and different g-functions.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P9() MTOP[source]

Generates Problem 9: T1 (MMDTLZ, F11) vs T2 (MMZDT, F18).

Both tasks are 2-objective, 50-dimensional.

  • T1: MMDTLZ-type with hybrid function F11 (Modified Schwefel).

  • T2: MMZDT-type with linear f1, hybrid function F18 (Cigar + HGBat + Rastrigin), and concave h.

  • Relationship: Different task types and different g-functions.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

problem_information = {'n_cases': 10, 'n_cons': '0', 'n_dims': '50', 'n_objs': '2', 'n_tasks': '2', 'type': 'synthetic'}
class ddmtolab.Problems.MTMO.mtmo_dtlz.MTMO_DTLZ[source]

Multi-Task Multi-Objective DTLZ benchmark problems.

These problems combine different DTLZ test functions as separate tasks within a multi-task optimization framework.

P1(M=3, D=10) MTOP[source]

Generates Problem 1: T1 (DTLZ2) vs T2 (DTLZ3).

  • T1: DTLZ2 with a simple uni-modal g-function. PF is the unit sphere.

  • T2: DTLZ3 with a multi-modal g-function. PF is the unit sphere (same shape as DTLZ2) but much harder to converge due to many local fronts.

  • Relationship: Both tasks share the same PF shape but differ in landscape difficulty, enabling potential knowledge transfer of convergence information.

Parameters:
  • M (int, optional) – Number of objectives (default is 3).

  • D (int, optional) – Number of decision variables (default is 10). Must satisfy D >= M.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP
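For reference, T1's base function is the canonical DTLZ2. The following is a minimal sketch of the standard formulation (not the modified variant used by this suite):

```python
import math

def dtlz2(x, M=3):
    """Canonical DTLZ2: minimize M objectives; the Pareto front is the unit sphere."""
    g = sum((xi - 0.5) ** 2 for xi in x[M - 1:])    # uni-modal distance function
    f = []
    for m in range(M):
        v = 1.0 + g
        for xi in x[: M - 1 - m]:                   # chain of cosine terms
            v *= math.cos(0.5 * math.pi * xi)
        if m > 0:                                   # one closing sine term
            v *= math.sin(0.5 * math.pi * x[M - 1 - m])
        f.append(v)
    return f
```

At the optimum (all distance variables equal to 0.5), g = 0 and the objective vector lies on the unit sphere; DTLZ3 keeps this front shape but replaces g with a multi-modal function, which is what makes T2 harder to converge.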

problem_information = {'n_cases': 1, 'n_cons': '0', 'n_dims': 'D', 'n_objs': 'M', 'n_tasks': '2', 'type': 'synthetic'}
class ddmtolab.Problems.MTMO.mtmo_instance.MTMOInstances[source]

Additional Multi-Task Multi-Objective Optimization (MTMO) benchmark problems.

These problems consist of two multi-objective optimization tasks with different characteristics to test knowledge transfer capabilities.

data_dir

The directory path for problem data files.

Type:

str

P1() MTOP[source]

Generates MTMO Instance 1: T1 (ZDT4_R, Rastrigin) vs T2 (ZDT4_G, Griewank).

Both tasks are 2-objective, 10-dimensional.

  • T1: Modified ZDT1-like with Rastrigin component in g-function. PF is continuous, convex.

  • T2: Modified ZDT2-like with Griewank component in g-function. PF is continuous, non-convex.

  • Relationship: Different g-functions create different landscape difficulties.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P2() MTOP[source]

Generates MTMO Instance 2: T1 (ZDT4_RC, Rastrigin + Constraint) vs T2 (ZDT4_A, Ackley).

T1 is 2-objective with 1 constraint (10-dimensional). T2 is 2-objective without constraints (10-dimensional).

  • T1: Modified ZDT1-like with Rastrigin component and a sinusoidal constraint.

    PF is continuous, convex, but partially infeasible.

  • T2: Modified ZDT2-like with Ackley component in g-function. PF is continuous, non-convex.

  • Relationship: One task has constraints while the other doesn’t, testing constraint handling transfer.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

problem_information = {'n_cases': 2, 'n_cons': '[0, 1]', 'n_dims': '10', 'n_objs': '2', 'n_tasks': '2', 'type': 'synthetic'}

Real-World Optimization (RWO) Problems:

class ddmtolab.Problems.RWO.pepvm.PEPVM[source]

Parameter Extraction of Photovoltaic Models (PEPVM) benchmark problem.

This problem consists of three single-objective optimization tasks for parameter extraction of different photovoltaic cell models using experimental I-V data.

  • Task 1: Single Diode Model (5 parameters)

  • Task 2: Double Diode Model (7 parameters)

  • Task 3: PV Module Model (5 parameters)

The tasks share similar parameter extraction objectives but differ in model complexity and experimental conditions.

References

[1] Li, S., Gu, Q., Gong, W., & Ning, B. (2020). An Enhanced Adaptive Differential Evolution Algorithm for Parameter Extraction of Photovoltaic Models. Energy Conversion and Management, 205, 112443.

[2] Li, Y., Gong, W., & Li, S. (2022). Multitasking Optimization via an Adaptive Solver Multitasking Evolutionary Framework. Information Sciences.

[3] Li, Y., Gong, W., & Li, S. (2023). Evolutionary Competitive Multitasking Optimization via Improved Adaptive Differential Evolution. Expert Systems with Applications, 119550.

Parameters: None required for this benchmark.

P1() MTOP[source]

Generates PEPVM Problem: Three photovoltaic parameter extraction tasks.

  • Task 1: Single Diode Model (5-D, experimental data at 33°C)

  • Task 2: Double Diode Model (7-D, experimental data at 33°C)

  • Task 3: PV Module Model (5-D, experimental data at 45°C)

All tasks minimize RMSE between measured and modeled I-V characteristics.

Returns:

A Multi-Task Optimization Problem instance with 3 tasks.

Return type:

MTOP
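The single-diode task above can be sketched as follows. This is a hedged illustration of the single-diode-model residual commonly used in this benchmark family; the helper name `sdm_rmse`, the parameter ordering, and the thermal-voltage value `Vt` are our assumptions, not the library's API:

```python
import math

def sdm_rmse(params, vi_data, Vt=0.0264):
    """RMSE of the single-diode model residual over measured (V, I) pairs.

    params: (Iph, Isd, Rs, Rsh, n) — photocurrent, diode saturation current,
    series resistance, shunt resistance, ideality factor (ordering assumed).
    Vt: thermal voltage at the measurement temperature (illustrative value).
    """
    Iph, Isd, Rs, Rsh, n = params
    errs = [
        Iph - Isd * (math.exp((V + I * Rs) / (n * Vt)) - 1.0)
        - (V + I * Rs) / Rsh - I
        for V, I in vi_data
    ]
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

The double-diode task adds a second diode term (two extra parameters, giving 7-D), and the PV-module task applies the same model per cell in a series string.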

problem_information = {'n_cases': 1, 'n_cons': '0', 'n_dims': '[5, 7]', 'n_objs': '1', 'n_tasks': '3', 'type': 'real_world'}
class ddmtolab.Problems.RWO.sopm.SOPM[source]

Implementation of Synchronous Optimal Pulse-width Modulation (SOPM) benchmark problems for Multi-Task Multi-Objective Optimization (MTMO).

These problems involve optimizing switching angles for multilevel inverters to minimize Total Harmonic Distortion (THD) and maintain desired fundamental voltage component while satisfying monotonicity constraints on switching angles.

References

[1] Y. Li, W. Gong, “Multiobjective Multitask Optimization with Multiple Knowledge Types and Transfer Adaptation,” IEEE Trans. Evol. Comput., 2024.

[2] A. Kumar et al., “A Benchmark-suite of Real-world Constrained Multi-objective Optimization Problems and Some Baseline Results,” Swarm and Evolutionary Computation, vol. 67, 2021.

Notes

All problems are constrained 2-objective optimization problems where:

  • Objective 1: Total Harmonic Distortion (THD)

  • Objective 2: Squared deviation from target modulation index

  • Constraints: Monotonically decreasing switching angles

P1() MTOP[source]

Generates SOPM MTMO Problem 1: [3, 5, 7]-level Inverters.

Three tasks optimizing switching angles for different inverter levels.

  • T1 (3-level): 2-objective, 25-dimensional

    • Decision variables: 25 switching angles in [0, 90] degrees

    • Target modulation index: m = 0.32

    • Constraints: 24 monotonicity constraints (α_i ≥ α_{i+1})

  • T2 (5-level): 2-objective, 25-dimensional

    • Decision variables: 25 switching angles in [0, 90] degrees

    • Target modulation index: m = 0.32

    • Constraints: 24 monotonicity constraints

  • T3 (7-level): 2-objective, 25-dimensional

    • Decision variables: 25 switching angles in [0, 90] degrees

    • Target modulation index: m = 0.36

    • Constraints: 24 monotonicity constraints

  • Relationship: Different inverter levels with similar structure but different harmonic patterns. Tests knowledge transfer across inverter configurations.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P2() MTOP[source]

Generates SOPM MTMO Problem 2: [9, 11, 13]-level Inverters.

Three tasks optimizing switching angles for higher-level inverters.

  • T1 (9-level): 2-objective, 30-dimensional

    • Decision variables: 30 switching angles in [0, 90] degrees

    • Target modulation index: m = 0.32

    • Constraints: 29 monotonicity constraints

  • T2 (11-level): 2-objective, 30-dimensional

    • Decision variables: 30 switching angles in [0, 90] degrees

    • Target modulation index: m = 0.3333

    • Constraints: 29 monotonicity constraints

  • T3 (13-level): 2-objective, 30-dimensional

    • Decision variables: 30 switching angles in [0, 90] degrees

    • Target modulation index: m = 0.32

    • Constraints: 29 monotonicity constraints

  • Relationship: Higher-level inverters with more complex harmonic patterns. Tests scalability and knowledge transfer for increased problem complexity.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP
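The monotonicity constraints described above (α_i ≥ α_{i+1}) can be expressed as violation values for a constrained optimizer. A minimal sketch — the helper name is ours, not part of the package:

```python
def monotonicity_violations(alpha):
    """For N switching angles, return the N-1 violations of α_i >= α_{i+1}.

    A feasible (monotonically decreasing) angle vector yields all zeros;
    any positive entry measures by how much the ordering is broken.
    """
    return [max(0.0, b - a) for a, b in zip(alpha, alpha[1:])]
```

For the 25-angle tasks of P1 this yields 24 constraint values, and for the 30-angle tasks of P2 it yields 29, matching `problem_information['n_cons']`.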

problem_information = {'n_cases': 2, 'n_cons': '[24, 29]', 'n_dims': '[25, 30]', 'n_objs': '2', 'n_tasks': '3', 'type': 'real_world'}
class ddmtolab.Problems.RWO.nn_training.NN_Training(test_ratio=0.3, seed=42)[source]

Neural Network Weight Training benchmark suite for single-task optimization.

The decision variables are the flattened weights and biases of a fixed-architecture MLP. The optimization algorithm directly searches for optimal weight configurations – evaluation is a single forward pass (no gradient-based training), making these problems fast to evaluate.

Data is split into train / test. The optimization objective is the test set error rate (classification) or test MSE (regression).

Problems are ordered from easy to hard (by dimension & difficulty):

P    Dataset                 Architecture   Dim    Task type
P1   California Housing      [8, 10, 1]     101    Regression
P2   Diabetes                [10, 10, 1]    121    Regression
P3   Digits                  [64, 10, 10]   760    Classification
P4   Covertype               [54, 20, 7]    1247   Classification
P5   Digits (large net)      [64, 20, 10]   1510   Classification
P6   Covertype (large net)   [54, 30, 7]    1867   Classification

Objectives (minimize):

  • Classification: test error rate (1 - accuracy), range [0, 1]

  • Regression: test MSE on standardized targets

Bounds: [-3, 3] for all weight parameters.
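The dimensions in the table follow directly from flattening each layer's weight matrix plus bias vector; a quick check:

```python
def n_params(arch):
    # Each consecutive layer pair (a, b) contributes a*b weights + b biases.
    return sum((a + 1) * b for a, b in zip(arch, arch[1:]))

dims = [n_params(a) for a in ([8, 10, 1], [10, 10, 1], [64, 10, 10],
                              [54, 20, 7], [64, 20, 10], [54, 30, 7])]
# dims == [101, 121, 760, 1247, 1510, 1867]
```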

Parameters:
  • test_ratio (float, optional) – Fraction of data for testing (default 0.3).

  • seed (int, optional) – Random seed for train/test split (default 42).

P1() MTOP[source]

Problem 1: California Housing regression.

Architecture: [8, 10, 1], 101-D. 5000 samples (subsampled). Objective: test MSE, minimize.

P2() MTOP[source]

Problem 2: Diabetes regression.

Architecture: [10, 10, 1], 121-D. 442 samples. Objective: test MSE, minimize.

P3() MTOP[source]

Problem 3: Digits classification (small net).

Architecture: [64, 10, 10], 760-D. 1797 samples, 10 classes. Objective: test error rate, minimize.

P4() MTOP[source]

Problem 4: Covertype classification (medium net).

Architecture: [54, 20, 7], 1247-D. 5000 samples (subsampled), 7 classes. Objective: test error rate, minimize.

P5() MTOP[source]

Problem 5: Digits classification (large net).

Architecture: [64, 20, 10], 1510-D. 1797 samples, 10 classes. Objective: test error rate, minimize.

P6() MTOP[source]

Problem 6: Covertype classification (large net).

Architecture: [54, 30, 7], 1867-D. 5000 samples (subsampled), 7 classes. Objective: test error rate, minimize.

problem_information = {'n_cases': 6, 'n_cons': '0', 'n_dims': '[101, 1867]', 'n_objs': '1', 'n_tasks': '1', 'type': 'real_world'}

Traveling Salesman Problem (TSP) benchmark problems.

This module provides real-world combinatorial optimization problems formulated as continuous single-task optimization via random keys encoding. Each decision variable x_i in [0, 1] represents the priority of city i; the visiting order (permutation) is obtained by sorting these priorities (argsort).

This encoding allows standard continuous evolutionary algorithms (GA, DE, PSO, CMA-ES, etc.) to solve TSP without any problem-specific operator.

The objective is the total Euclidean tour length (round trip), to be minimized.
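The random-keys decoding and tour-length objective described above can be sketched in a few lines of pure Python (the `tour_length` helper is a hypothetical illustration, not part of the package):

```python
import math

def tour_length(keys, cities):
    """Decode random keys via argsort and return the round-trip Euclidean length."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])  # argsort of priorities
    total = 0.0
    for a, b in zip(order, order[1:] + order[:1]):           # close the loop
        (x1, y1), (x2, y2) = cities[a], cities[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Keys in increasing order visit the unit square's corners in sequence:
cities = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Because only the relative order of the keys matters, any continuous optimizer operating on the unit hypercube implicitly searches the space of permutations.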

Problems are ordered from easy to hard (by number of cities):

P    Instance       Cities   Description
P1   Random-20      20       Random layout
P2   Circle-30      30       Circular layout
P3   Clustered-50   50       5 clusters
P4   Random-50      50       Random layout
P5   Random-100     100      Random layout
P6   Random-200     200      Random layout

References

[1] Reinelt, G. (1991). “TSPLIB – A Traveling Salesman Problem Library.” ORSA Journal on Computing, 3(4), 376-384.

[2] Bean, J.C. (1994). “Genetic Algorithms and Random Keys for Sequencing and Optimization.” ORSA Journal on Computing, 6(2), 154-160.

[3] Applegate, D.L., Bixby, R.E., Chvatal, V., and Cook, W.J. (2006). “The Traveling Salesman Problem: A Computational Study.” Princeton University Press.

class ddmtolab.Problems.RWO.tsp.TSP(seed=42)[source]

Traveling Salesman Problem (TSP) benchmark suite for single-task optimization.

Decision variables are random keys in [0, 1]: continuous values whose argsort defines the visiting permutation. This allows any continuous optimizer (GA, DE, PSO, CMA-ES, etc.) to solve TSP directly.

Objective (minimize): total Euclidean round-trip tour length.

Parameters:

seed (int, optional) – Random seed for city coordinate generation (default 42).

P1() MTOP[source]

Problem 1: Random-20 – 20 random cities, 20-D.

Objective: total tour length, minimize.

References: [1] [2]

P2() MTOP[source]

Problem 2: Circle-30 – 30 cities on a circle, 30-D.

The optimal tour visits cities in order around the circle. Objective: total tour length, minimize.

References: [1] [2]

P3() MTOP[source]

Problem 3: Clustered-50 – 50 cities in 5 clusters, 50-D.

Cities are grouped in 5 clusters; intra-cluster distances are small. Objective: total tour length, minimize.

References: [1] [2] [3]

P4() MTOP[source]

Problem 4: Random-50 – 50 random cities, 50-D.

Objective: total tour length, minimize.

References: [1] [2] [3]

P5() MTOP[source]

Problem 5: Random-100 – 100 random cities, 100-D.

Objective: total tour length, minimize.

References: [1] [2] [3]

P6() MTOP[source]

Problem 6: Random-200 – 200 random cities, 200-D.

Objective: total tour length, minimize.

References: [1] [2] [3]

plot_tour(problem_id, decision_vars, title=None, save_path=None, figsize=(8, 4), show=True)[source]

Plot the TSP tour defined by decision variables (random keys).

Parameters:
  • problem_id (int) – Problem index (1-6).

  • decision_vars (np.ndarray, shape (n_cities,)) – Decision variables in [0, 1]. The tour is obtained via argsort.

  • title (str, optional) – Figure title. If None, auto-generated from problem name and tour length.

  • save_path (str, optional) – If provided, save the figure to this path.

  • figsize (tuple, optional) – Figure size (default (8, 4)).

  • show (bool, optional) – Whether to call plt.show() (default True).

Returns:

The total tour length.

Return type:

float

problem_information = {'n_cases': 6, 'n_cons': '0', 'n_dims': '[20, 200]', 'n_objs': '1', 'n_tasks': '1', 'type': 'real_world'}
class ddmtolab.Problems.RWO.scp.SCP(Nmin=25, Nmax=35)[source]

Implementation of the Sensor Coverage Problem (SCP) for Multi-Task Optimization.

This problem involves optimizing sensor placements to maximize coverage of target points while minimizing the number of sensors and their sensing radii. Each task corresponds to a different number of sensors (variable-length optimization).

The problem optimizes sensor positions (x, y) and sensing radii (r) to:

  • Maximize coverage of target points

  • Minimize number of sensors

  • Minimize sensing costs (proportional to r²)

References

[1] M. L. Ryerkerk et al., “Solving Metameric Variable-length Optimization Problems Using Genetic Algorithms,” Genetic Programming and Evolvable Machines, vol. 18, no. 2, pp. 247-277, 2017.

[2] G. Li et al., “Evolutionary Competitive Multitasking Optimization,” IEEE Trans. Evol. Comput., 2022.

Nmin

Minimum number of sensors (default: 25)

Type:

int

Nmax

Maximum number of sensors (default: 35)

Type:

int

A

Target points to be covered, shape (n_points, 2)

Type:

ndarray

data_dir

The directory path for problem data files.

Type:

str

P1() MTOP[source]

Generates SCP Problem 1: Multi-Task Sensor Coverage Optimization.

Creates tasks for different numbers of sensors from Nmin to Nmax. Each task optimizes sensor placements and radii.

Task Structure: - T_i (i sensors): 1-objective, (3*i)-dimensional

  • Decision variables: [x1, y1, r1, x2, y2, r2, …, xi, yi, ri]

  • x, y: Sensor positions in [-1, 1]

  • r: Sensing radii in [0.1, 0.25]

  • Objective: Weighted sum of:

    • Coverage penalty: 1000 * (1 - coverage_ratio)

    • Sensor count penalty: 1 * number_of_sensors

    • Sensing cost: 10 * sum(r²)

  • Relationship: Variable-length tasks with increasing complexity. Tests transfer learning across different problem dimensions.

Returns:

A Multi-Task Optimization Problem instance with (Nmax - Nmin + 1) tasks.

Return type:

MTOP
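The weighted-sum objective above can be sketched as follows. The `scp_objective` helper is a hypothetical illustration (assuming Euclidean disc coverage), not the library's implementation:

```python
import math

def scp_objective(sensors, targets):
    """sensors: list of (x, y, r) triples; targets: list of (x, y) points."""
    covered = sum(
        any(math.hypot(tx - x, ty - y) <= r for x, y, r in sensors)
        for tx, ty in targets
    )
    coverage_ratio = covered / len(targets)
    return (1000.0 * (1.0 - coverage_ratio)              # coverage penalty
            + 1.0 * len(sensors)                         # sensor count penalty
            + 10.0 * sum(r * r for *_, r in sensors))    # sensing cost
```

The large coverage weight makes full coverage dominate; the sensor-count and r² terms then break ties in favor of cheaper deployments.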

problem_information = {'n_cases': 1, 'n_cons': '0', 'n_dims': '[75, 105]', 'n_objs': '1', 'n_tasks': '11', 'type': 'real_world'}
class ddmtolab.Problems.RWO.mo_scp.MO_SCP[source]

Implementation of Multi-Objective Sensor Coverage Problem (MO_SCP) for Multi-Task Multi-Objective Optimization.

This problem extends the single-objective SCP to bi-objective optimization, balancing coverage maximization against sensor deployment costs. Each task corresponds to a different number of sensors with variable-length dimensions.

Objectives:

  • f1: Inverted coverage percentage (minimize uncovered area)

  • f2: Total sensor cost (number of sensors + sensing costs)

References

[1] Y. Li et al., “Transfer Search Directions Among Decomposed Subtasks for Evolutionary Multitasking in Multiobjective Optimization,” GECCO, 2024.

[2] Y. Li et al., “Evolutionary Competitive Multiobjective Multitasking: One-Pass Optimization of Heterogeneous Pareto Solutions,” IEEE TEVC, 2024.

A

Target points to be covered, shape (n_points, 2)

Type:

ndarray

data_dir

The directory path for problem data files.

Type:

str

P1(Nmin=28, task_num=5, gap=1) MTOP[source]

Generates MO_SCP Problem 1: Multi-Objective Sensor Coverage with uniform gap.

Creates multiple tasks with different numbers of sensors, where sensor counts increase uniformly by a fixed gap.

Parameters:
  • Nmin (int, optional) – Minimum number of sensors (default: 28)

  • task_num (int, optional) – Number of tasks to create (default: 5)

  • gap (int, optional) – Gap between consecutive tasks’ sensor numbers (default: 1)

Task Structure: - T_i (k sensors): 2-objective, (3*k)-dimensional

  • Decision variables: [x1, y1, r1, …, xk, yk, rk]

  • x, y: Sensor positions in [-1, 1]

  • r: Sensing radii in [0.1, 0.25]

  • Objective 1 (f1): Inverted coverage percentage (0-100)

  • Objective 2 (f2): Total cost = sensor_count + 10*sum(r²)

  • Relationship: Variable-length tasks; tests transfer learning across different problem scales.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP

P2(Nmin=25, task_num=4, gap=3) MTOP[source]

Generates MO_SCP Problem 2: Multi-Objective Sensor Coverage with larger gap.

Creates multiple tasks with different numbers of sensors, where sensor counts increase with a larger gap. The second objective includes an additional coupling term for increased task correlation.

Parameters:
  • Nmin (int, optional) – Minimum number of sensors (default: 25)

  • task_num (int, optional) – Number of tasks to create (default: 4)

  • gap (int, optional) – Gap between consecutive tasks’ sensor numbers (default: 3)

Task Structure: - T_i (k sensors): 2-objective, (3*k)-dimensional

  • Decision variables: [x1, y1, r1, …, xk, yk, rk]

  • x, y: Sensor positions in [-1, 1]

  • r: Sensing radii in [0.1, 0.25]

  • Objective 1 (f1): Inverted coverage percentage (0-100)

  • Objective 2 (f2): Total cost + f1/10 (coupled objectives)

  • Relationship: Larger gaps between sensor counts, to test transfer learning in more heterogeneous scenarios.

Returns:

A Multi-Task Multi-Objective Optimization Problem instance.

Return type:

MTOP
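The two objectives described for both problems can be sketched together. The `mo_scp_objectives` helper is a hypothetical illustration (assuming Euclidean disc coverage), not the library's implementation:

```python
import math

def mo_scp_objectives(sensors, targets, coupled=False):
    """sensors: list of (x, y, r) triples; targets: list of (x, y) points.

    Returns (f1, f2): inverted coverage percentage and total sensor cost.
    coupled=True adds the f1/10 coupling term used by P2.
    """
    covered = sum(
        any(math.hypot(tx - x, ty - y) <= r for x, y, r in sensors)
        for tx, ty in targets
    )
    f1 = 100.0 * (1.0 - covered / len(targets))            # uncovered percentage
    f2 = len(sensors) + 10.0 * sum(r * r for *_, r in sensors)
    if coupled:
        f2 += f1 / 10.0                                    # objective coupling (P2)
    return f1, f2
```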

problem_information = {'n_cases': 2, 'n_cons': '0', 'n_dims': '[75, 105]', 'n_objs': '2', 'n_tasks': '[4, 5]', 'type': 'real_world'}
class ddmtolab.Problems.RWO.pkacp.PKACP[source]

Implementation of Planar Kinematic Arm Control Problem (PKACP) for Multi-Task/Many-Task Single-Objective Optimization.

This problem involves controlling a planar kinematic arm with multiple joints to reach a target position. Each task has different constraints on the maximum angular range and link lengths, creating diverse optimization landscapes.

The objective is to minimize the Euclidean distance between the end effector position and the target position (0.5, 0.5).

References

[1] Y. Jiang et al., “A Bi-Objective Knowledge Transfer Framework for Evolutionary Many-Task Optimization,” IEEE TEVC, 2022.

[2] H. Xu et al., “Evolutionary Multi-Task Optimization with Adaptive Knowledge Transfer,” IEEE TEVC, 2021.

data_dir

Directory for task parameter files (in user’s home directory)

Type:

str

P1(task_num=20, dim=20) MTOP[source]

Generates PKACP Problem 1: Planar Kinematic Arm Control.

Creates multiple tasks with different maximum angular ranges and link lengths. Each task optimizes joint angles to minimize the distance between the end effector and target position.

Parameters:
  • task_num (int, optional) – Number of tasks to create (default: 20)

  • dim (int, optional) – Number of joints (dimensionality) for each task (default: 20)

  • Task Structure –

    Each task T_i has:

    • Decision variables: Joint angles in [0, 1]

    • These are scaled to actual angular ranges based on task parameters

    • Objective: Euclidean distance to target (0.5, 0.5)

  • Task Parameters –

    • Amax: Maximum angular range

    • Lmax: Maximum link length

    • These parameters are generated using CVT (Centroidal Voronoi Tessellation) to create diverse but structured task variations

  • Task Relationship – Related arm geometries enable transfer learning across different arm configurations.

Returns:

A Multi-Task Optimization Problem instance with task_num tasks.

Return type:

MTOP

problem_information = {'n_cases': 1, 'n_cons': '0', 'n_dims': 'D', 'n_objs': '1', 'n_tasks': 'K', 'type': 'real_world'}
class ddmtolab.Problems.RWO.pinn_hpo.PINN_HPO[source]

Physics-Informed Neural Network Hyperparameter Optimization (PINN-HPO) benchmark suite.

This class provides a collection of multi-task optimization problems for tuning PINN hyperparameters across different PDEs (Convection, Reaction, Wave, Helmholtz). Each problem consists of multiple related tasks with varying PDE parameters.

Notes

Decision variables for all problems:

  • x[0]: Number of layers (integer, [2, 10])

  • x[1]: Number of nodes per layer (integer, [5, 100])

  • x[2]: Activation function (float, [0, 5] mapping to tanh/relu/sigmoid/sin/swish)

  • x[3]: Training epochs (integer, [5000, 100000])

  • x[4]: Grid size (integer, [10, 200])

  • x[5]: Learning rate (float, [1e-5, 0.1])

P1()[source]

Generates Problem 1: Convection (\(\beta=20\), \(\beta=30\)).

Two-task hyperparameter optimization for Convection PDE with different convection velocities.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P10()[source]

Generates Problem 10: Mixed (Wave \(\alpha=4, \beta=3\); Helmholtz \(n=4\)).

Two-task mixed hyperparameter optimization combining Wave and Helmholtz PDEs.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P11()[source]

Generates Problem 11: Mixed (Convection \(\beta=30\), Reaction \(\rho=5\), Wave \(\alpha=4, \beta=3\)).

Three-task mixed hyperparameter optimization combining Convection, Reaction, and Wave PDEs.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P12()[source]

Generates Problem 12: Mixed (Convection \(\beta=30\), Reaction \(\rho=5\), Wave \(\alpha=4, \beta=3\), Helmholtz \(n=4\)).

Four-task mixed hyperparameter optimization combining all PDE types: Convection, Reaction, Wave, and Helmholtz.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P2()[source]

Generates Problem 2: Reaction (\(\rho=4\), \(\rho=5\)).

Two-task hyperparameter optimization for Reaction PDE with different reaction rates.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P3()[source]

Generates Problem 3: Wave (\(\alpha=3, \beta=3\); \(\alpha=4, \beta=3\)).

Two-task hyperparameter optimization for Wave PDE with different wave speed parameters.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P4()[source]

Generates Problem 4: Helmholtz (\(n=3\), \(n=4\)).

Two-task hyperparameter optimization for Helmholtz PDE with different wave number multipliers.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P5()[source]

Generates Problem 5: Convection (\(\beta=20\), \(\beta=30\), \(\beta=40\)).

Three-task hyperparameter optimization for Convection PDE with different convection velocities.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P6()[source]

Generates Problem 6: Reaction (\(\rho=4\), \(\rho=5\), \(\rho=6\)).

Three-task hyperparameter optimization for Reaction PDE with different reaction rates.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P7()[source]

Generates Problem 7: Wave (\(\alpha=3, \beta=3\); \(\alpha=4, \beta=3\); \(\alpha=4, \beta=4\)).

Three-task hyperparameter optimization for Wave PDE with different wave speed and frequency parameters.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P8()[source]

Generates Problem 8: Helmholtz (\(n=3\), \(n=4\), \(n=5\)).

Three-task hyperparameter optimization for Helmholtz PDE with different wave number multipliers.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

P9()[source]

Generates Problem 9: Mixed (Convection \(\beta=30\), Reaction \(\rho=5\)).

Two-task mixed hyperparameter optimization combining Convection and Reaction PDEs.

Returns:

A Multi-Task Optimization Problem instance.

Return type:

MTOP

static T10_helmholtz_n4(x)[source]
static T10_wave_alpha4_beta3(x)[source]
static T11_convection_beta30(x)[source]
static T11_reaction_rho5(x)[source]
static T11_wave_alpha4_beta3(x)[source]
static T12_convection_beta30(x)[source]
static T12_helmholtz_n4(x)[source]
static T12_reaction_rho5(x)[source]
static T12_wave_alpha4_beta3(x)[source]
static T1_convection_beta20(x)[source]
static T1_convection_beta30(x)[source]
static T2_reaction_rho4(x)[source]
static T2_reaction_rho5(x)[source]
static T3_wave_alpha3_beta3(x)[source]
static T3_wave_alpha4_beta3(x)[source]
static T4_helmholtz_n3(x)[source]
static T4_helmholtz_n4(x)[source]
static T5_convection_beta20(x)[source]
static T5_convection_beta30(x)[source]
static T5_convection_beta40(x)[source]
static T6_reaction_rho4(x)[source]
static T6_reaction_rho5(x)[source]
static T6_reaction_rho6(x)[source]
static T7_wave_alpha3_beta3(x)[source]
static T7_wave_alpha4_beta3(x)[source]
static T7_wave_alpha4_beta4(x)[source]
static T8_helmholtz_n3(x)[source]
static T8_helmholtz_n4(x)[source]
static T8_helmholtz_n5(x)[source]
static T9_convection_beta30(x)[source]
static T9_reaction_rho5(x)[source]
problem_information = {'n_cases': 12, 'n_cons': '0', 'n_dims': '6', 'n_objs': '1', 'n_tasks': '[2, 4]', 'type': 'real_world'}

Methods and Utilities

Batch Experiment

class ddmtolab.Methods.batch_experiment.BatchExperiment(base_path: str = './Data', clear_folder: bool = False)[source]

Bases: object

Batch Experiment Module

This class provides a framework to define and run batch experiments for multiple optimization algorithms on multiple benchmark problems. It supports:

  • Adding multiple problems via problem creator functions.

  • Adding multiple optimization algorithm classes with fixed parameters.

  • Running experiments in parallel using multiple CPU cores.

  • Logging execution time, status, and errors for each run.

  • Saving timing summaries to CSV files.

  • Printing experiment configuration summaries to console.

  • Optional folder clearing before experiments.

  • Saving and loading experiment configuration from YAML files.

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.11.25 Version: 1.0

add_algorithm(algorithm_class: Type, algorithm_name: str, **params)[source]

Add an optimization algorithm class

Parameters:
  • algorithm_class – Algorithm class (e.g., GA, DE, PSO, etc.)

  • algorithm_name – Algorithm name (used for file naming and folder creation)

  • **params – Fixed parameters for the algorithm (e.g., n, max_nfes, muc, mum, etc.) Note: problem, save_path, and name will be set automatically

add_problem(problem_creator, problem_name: str, **problem_params)[source]

Add an experiment problem using a creator function

Parameters:
  • problem_creator – Function that creates the problem instance

  • problem_name – Problem name (used for file naming)

  • **problem_params – Parameters to pass to problem creator

run(n_runs: int | None = None, verbose: bool = True, max_workers: int | None = None)[source]

Run all experiments using multi-core parallel processing

Parameters:
  • n_runs – Number of independent runs for each algorithm on each problem If None and loaded from config, uses config value

  • verbose – Whether to print detailed progress information

  • max_workers – Maximum number of worker processes, defaults to CPU count if None If None and loaded from config, uses config value

Data Analysis

class ddmtolab.Methods.data_analysis.DataAnalyzer(data_path: str | Path = './Data', settings: Dict[str, Any] | None = None, algorithm_order: List[str] | None = None, save_path: str | Path = './Results', table_format: str = 'excel', figure_format: str = 'pdf', statistic_type: str = 'mean', significance_level: float = 0.05, rank_sum_test: bool = True, log_scale: bool = False, show_pf: bool = True, show_nd: bool = True, merge_plots: bool = False, merge_columns: int = 3, show_std_band: bool = False, best_so_far: bool = True, clear_results: bool = True, convergence_k: int | None = None)[source]

Main class for comprehensive data analysis and visualization of multi-task optimization experiments.

This class provides a complete pipeline for:

  • Scanning data directories to detect algorithms, problems, and runs

  • Calculating performance metrics (IGD, HV, or objective values)

  • Generating statistical comparison tables (Excel or LaTeX)

  • Creating convergence curve plots

  • Visualizing runtime comparisons

  • Plotting non-dominated solutions

data_path

Path to the data directory containing experiment results.

Type:

Path

settings

Problem settings including reference definitions and metric configuration.

Type:

Optional[Dict[str, Any]]

algorithm_order

Custom ordering of algorithms for display.

Type:

Optional[List[str]]

table_config

Configuration for table generation.

Type:

TableConfig

plot_config

Configuration for plot generation.

Type:

PlotConfig

run() MetricResults[source]

Execute the complete analysis pipeline.

This method runs all analysis steps in sequence:

  1. Clear existing results (if configured)

  2. Scan data directory

  3. Calculate metrics

  4. Generate statistical tables

  5. Generate convergence plots

  6. Generate runtime plots

  7. Generate non-dominated solution plots

Returns:

Complete metric results from the analysis.

Return type:

MetricResults

class ddmtolab.Methods.test_data_analysis.TestDataAnalyzer(data_path: str | Path = './Data', settings: Dict[str, Any] | None = None, algorithm_order: List[str] | None = None, save_path: str | Path = './Results', figure_format: str = 'pdf', log_scale: bool = False, show_pf: bool = True, show_nd: bool = True, best_so_far: bool = True, clear_results: bool = True, file_suffix: str = '.pkl')[source]

Main class for analyzing single-run test data.

This class handles pickle files stored directly in the data folder, providing a lightweight analysis pipeline without statistical analysis.

data_path

Path to the data directory containing pickle files.

Type:

Path

settings

Problem settings including reference definitions.

Type:

Optional[Dict[str, Any]]

algorithm_order

Custom ordering of algorithms for display.

Type:

Optional[List[str]]

plot_config

Configuration for plot generation.

Type:

PlotConfig

run() TestMetricResults[source]

Execute the complete test analysis pipeline.

Returns:

Complete metric results from the analysis.

Return type:

TestMetricResults

Performance Metrics

class ddmtolab.Methods.metrics.CV[source]

Constraint Violation (CV) metric. Lower is better (ideally 0 for feasible solutions).

calculate(cons: ndarray) float[source]

Compute CV metric - returns the best (minimum) CV in the population

Parameters:

cons ((n, c) constraint violation matrix) – where n is the number of solutions and c is the number of constraints Constraint is satisfied when cons <= 0

Returns:

CV value of the best solution (minimum CV)

Return type:

float
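
As a self-contained illustration (not the library's exact code), the CV of a solution can be taken as the sum of its positive constraint violations, with the metric reporting the population minimum:

```python
import numpy as np

def cv_best(cons: np.ndarray) -> float:
    """Smallest total constraint violation in the population.

    A constraint is satisfied when its value is <= 0, so only
    positive entries contribute to a solution's violation.
    """
    violation = np.maximum(cons, 0.0).sum(axis=1)  # per-solution CV
    return float(violation.min())

cons = np.array([[-1.0, 0.0],   # feasible: CV = 0
                 [ 0.5, 2.0]])  # infeasible: CV = 2.5
print(cv_best(cons))  # → 0.0
```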

class ddmtolab.Methods.metrics.DeltaP[source]

Averaged Hausdorff Distance (Δp) metric. Lower is better.

calculate(objs: ndarray, pf: ndarray) float[source]

Compute Δp(objs, pf)

Parameters:
  • objs ((n, m) obtained objective vectors)

  • pf ((n_pf, m) true Pareto front)

Returns:

Δp value (Averaged Hausdorff Distance)

Return type:

float

class ddmtolab.Methods.metrics.FR[source]

Feasible Rate metric. Calculates the proportion of feasible solutions in the population. Higher is better (more feasible solutions).

calculate(cons: ndarray) float[source]

Compute feasible rate

Parameters:

cons ((n, c) constraint violation matrix) – where n is the number of solutions and c is the number of constraints A solution is feasible if all constraints <= 0

Returns:

Feasible rate (proportion of feasible solutions)

Return type:

float
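
The feasible rate follows directly from the documented feasibility rule (all constraints <= 0); a minimal NumPy sketch:

```python
import numpy as np

def feasible_rate(cons: np.ndarray) -> float:
    """Proportion of solutions satisfying every constraint (<= 0)."""
    return float(np.all(cons <= 0, axis=1).mean())

cons = np.array([[-1.0, -0.5],   # feasible
                 [ 0.2, -0.1],   # infeasible
                 [ 0.0,  0.0]])  # feasible (boundary counts)
print(feasible_rate(cons))  # → 0.666...
```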

class ddmtolab.Methods.metrics.GD[source]

Generational Distance (GD) metric. Lower is better.

calculate(objs: ndarray, pf: ndarray) float[source]

Compute GD(objs, pf)

Parameters:
  • objs ((n, m) obtained objective vectors)

  • pf ((n_pf, m) true Pareto front)

Returns:

GD value

Return type:

float
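
A minimal sketch of the common definition (the library's aggregation may differ in detail): the average Euclidean distance from each obtained point to its nearest point on the true front:

```python
import numpy as np

def gd(objs: np.ndarray, pf: np.ndarray) -> float:
    """Average distance from each obtained point to its nearest PF point."""
    d = np.linalg.norm(objs[:, None, :] - pf[None, :, :], axis=2)  # (n, n_pf)
    return float(d.min(axis=1).mean())

pf = np.array([[0.0, 1.0], [1.0, 0.0]])
objs = np.array([[0.0, 1.0],   # on the front: distance 0
                 [1.0, 1.0]])  # distance 1 to the nearest PF point
print(gd(objs, pf))  # → 0.5
```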

class ddmtolab.Methods.metrics.HV[source]

Hypervolume (HV) metric. Higher is better.

calculate(objs: ndarray, pf: ndarray | None = None, reference: ndarray | None = None) float[source]

Compute HV for a set of objective vectors.

Parameters:
  • objs (np.ndarray) – Objective matrix, shape (n, m)

  • pf (np.ndarray, optional) – True Pareto front for normalization, shape (n_pf, m)

  • reference (np.ndarray, optional) – Reference point for HV calculation, shape (m,)

Returns:

HV value

Return type:

float
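
For two objectives, HV can be computed exactly by sweeping the points in order of the first objective; a self-contained sketch assuming minimization and an explicit reference point (normalization against pf, as the class supports, is omitted):

```python
import numpy as np

def hv_2d(objs: np.ndarray, reference: np.ndarray) -> float:
    """Exact hypervolume for 2-objective minimization w.r.t. a reference point."""
    pts = objs[np.all(objs <= reference, axis=1)]  # ignore points outside the box
    pts = pts[np.argsort(pts[:, 0])]               # sweep by f1 ascending
    hv, best_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        if f2 < best_f2:                           # each non-dominated point adds a slab
            hv += (reference[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

objs = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
print(hv_2d(objs, np.array([5.0, 5.0])))  # → 11.0
```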

class ddmtolab.Methods.metrics.IGD[source]

Inverted Generational Distance (IGD) metric. Lower is better.

calculate(objs: ndarray, pf: ndarray) float[source]

Compute IGD(objs, pf)

Parameters:
  • objs ((n, m) obtained objective vectors)

  • pf ((n_pf, m) true Pareto front)

Returns:

IGD value

Return type:

float
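
IGD reverses the direction of GD: distances run from the true front to the obtained set, so it also penalizes poor coverage. A minimal sketch of the common definition:

```python
import numpy as np

def igd(objs: np.ndarray, pf: np.ndarray) -> float:
    """Average distance from each PF point to its nearest obtained point."""
    d = np.linalg.norm(pf[:, None, :] - objs[None, :, :], axis=2)  # (n_pf, n)
    return float(d.min(axis=1).mean())

pf = np.array([[0.0, 1.0], [1.0, 0.0]])
objs = np.array([[0.0, 1.0], [1.0, 1.0]])
print(igd(objs, pf))  # → 0.5
```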

class ddmtolab.Methods.metrics.IGDp[source]

Inverted Generational Distance Plus (IGD+) metric. Lower is better.

calculate(objs: ndarray, pf: ndarray) float[source]

Compute IGD+(objs, pf)

Parameters:
  • objs ((n, m) obtained objective vectors)

  • pf ((n_pf, m) true Pareto front)

Returns:

IGD+ value

Return type:

float

class ddmtolab.Methods.metrics.Spacing[source]

Spacing metric. Lower is better.

calculate(objs: ndarray) float[source]

Compute Spacing metric

Parameters:

objs ((n, m) obtained objective vectors) – where n is the number of solutions and m is the number of objectives

Returns:

Spacing value (standard deviation of nearest neighbor distances)

Return type:

float
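
Following the return description above, a minimal sketch computes the standard deviation of each solution's nearest-neighbor distance (a perfectly even front scores 0):

```python
import numpy as np

def spacing(objs: np.ndarray) -> float:
    """Standard deviation of each solution's distance to its nearest neighbor."""
    d = np.linalg.norm(objs[:, None, :] - objs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # exclude self-distances
    return float(d.min(axis=1).std())

objs = np.array([[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]])  # evenly spread front
print(spacing(objs))  # → 0.0
```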

class ddmtolab.Methods.metrics.Spread[source]

Spread metric. Lower is better.

calculate(objs: ndarray, pf: ndarray) float[source]

Compute Spread metric

Parameters:
  • objs ((n, m) obtained objective vectors) – where n is the number of solutions and m is the number of objectives

  • pf ((n_pf, m) true Pareto front)

Returns:

Spread value

Return type:

float

Algorithm Utilities

This script contains commonly used components for implementing algorithms.

Author: Jiangtao Shen Email: j.shen5@exeter.ac.uk Date: 2025.10.18 Version: 1.0

class ddmtolab.Methods.Algo_Methods.algo_utils.Results(best_decs: List[ndarray], best_objs: List[ndarray], all_decs: List[List[ndarray]], all_objs: List[List[ndarray]], runtime: float, max_nfes: List[int], best_cons: List[ndarray] | None = None, all_cons: List[List[ndarray]] | None = None, bounds: List[Tuple[ndarray, ndarray]] | None = None)[source]

Container for optimization results.

best_decs

Best decision variables for each task

Type:

List[np.ndarray]

best_objs

Best objective values for each task

Type:

List[np.ndarray]

all_decs

Decision variables history for all tasks across generations

Type:

List[List[np.ndarray]]

all_objs

Objective values history for all tasks across generations

Type:

List[List[np.ndarray]]

runtime

Total runtime in seconds

Type:

float

max_nfes

Maximum function evaluations per task

Type:

List[int]

best_cons

Best constraint values for each task (None if unconstrained)

Type:

Optional[List[np.ndarray]]

all_cons

Constraint values history for all tasks (None if unconstrained)

Type:

Optional[List[List[np.ndarray]]]

bounds

Bounds for each task, where each element is a tuple (lower, upper) of arrays with shape (dim,)

Type:

Optional[List[Tuple[np.ndarray, np.ndarray]]]

all_cons: List[List[ndarray]] | None = None
all_decs: List[List[ndarray]]
all_objs: List[List[ndarray]]
best_cons: List[ndarray] | None = None
best_decs: List[ndarray]
best_objs: List[ndarray]
bounds: List[Tuple[ndarray, ndarray]] | None = None
max_nfes: List[int]
runtime: float
ddmtolab.Methods.Algo_Methods.algo_utils.append_history(*pairs: Any) Tuple[list, ...][source]

Append current generation data to history storage.

Parameters:

*pairs (tuple) –

Alternating pairs of (history_list, current_data).

  • history_list: List to store historical data

  • current_data: Either a single np.ndarray (single task) or List[np.ndarray] (multi-task)

Returns:

results – All updated history lists (all_1, all_2, …)

Return type:

tuple

ddmtolab.Methods.Algo_Methods.algo_utils.build_save_results(all_decs: List[List[ndarray]], all_objs: List[List[ndarray]], runtime: float, max_nfes: List[int], all_cons: List[List[ndarray]] | None = None, bounds: List[Tuple[ndarray, ndarray]] | None = None, save_path: str | None = None, filename: str | None = None, save_data: bool = True, **kwargs) Results[source]

Extract best solutions, build results, and optionally save to file.

Automatically detects single-objective vs multi-objective tasks:

  • Single-objective (n_objs=1): returns the best individual

  • Multi-objective (n_objs>1): returns the entire final population (Pareto front)

Parameters:
  • all_decs (List[List[np.ndarray]]) – Decision variables history for all tasks. all_decs[i][g] has shape (n_samples, dim) for task i at generation g.

  • all_objs (List[List[np.ndarray]]) – Objective values history for all tasks. all_objs[i][g] has shape (n_samples, n_objs) for task i at generation g.

  • runtime (float) – Total runtime in seconds

  • max_nfes (List[int]) – Maximum function evaluations per task

  • all_cons (List[List[np.ndarray]], optional) – Constraint values history for all tasks (default: None)

  • bounds (List[Tuple[np.ndarray, np.ndarray]], optional) – Bounds (lower, upper) for each task (default: None)

  • save_path (str, optional) – Directory path where the results will be saved (default: None)

  • filename (str, optional) – Name of the output file without extension (default: None)

  • save_data (bool, optional) – Whether to save the data to file (default: True)

  • **kwargs (dict) – Additional data to include in the saved file

Returns:

results – Results object containing best solutions and optimization history

Return type:

Results

ddmtolab.Methods.Algo_Methods.algo_utils.crowding_distance(pop_obj: ndarray, front_no: ndarray | None = None) ndarray[source]

Calculate the crowding distance for a population of solutions.

Parameters:
  • pop_obj (np.ndarray) – Objective value matrix, shape (n, m), where n is the number of solutions and m is the number of objectives

  • front_no (np.ndarray, optional) – Non-dominated front number for each solution, shape (n,). If not provided, all solutions are assumed to belong to the same front.

Returns:

crowd_dis – Crowding distance for each solution, shape (n,). Boundary solutions are assigned infinite distance.

Return type:

np.ndarray
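
A sketch of the standard NSGA-II crowding distance for a single front (boundary solutions receive infinite distance, as documented); the library's handling of multiple fronts via front_no is omitted:

```python
import numpy as np

def crowding(pop_obj: np.ndarray) -> np.ndarray:
    """Crowding distance within a single front; boundaries get infinity."""
    n, m = pop_obj.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(pop_obj[:, j])
        f = pop_obj[order, j]
        dist[order[0]] = dist[order[-1]] = np.inf  # boundary solutions
        span = f[-1] - f[0]
        if span > 0:  # normalized gap between each point's two neighbors
            dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist

objs = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(crowding(objs))  # boundaries are infinite, middle point sums to 2.0
```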

ddmtolab.Methods.Algo_Methods.algo_utils.de_generation(parents: ndarray, F: float, CR: float) ndarray[source]

Generate offspring for a population using Differential Evolution (DE).

Uses DE/rand/1/bin strategy: random base vector, one difference vector, and binomial crossover.

Parameters:
  • parents (np.ndarray) – Array of parent solutions, shape (n, d)

  • F (float) – Differential weight (mutation scale factor)

  • CR (float) – Crossover rate in [0, 1] for binomial crossover

Returns:

offdecs – Offspring array, shape (n, d), clipped to [0, 1]

Return type:

np.ndarray
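
A sketch of the documented DE/rand/1/bin strategy in normalized [0, 1] space (illustrative, not the library's exact implementation):

```python
import numpy as np

def de_rand_1_bin(parents, F=0.5, CR=0.9, rng=None):
    """DE/rand/1/bin: v = x_r1 + F * (x_r2 - x_r3), binomial crossover, clip to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = parents.shape
    off = parents.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = parents[r1] + F * (parents[r2] - parents[r3])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True  # guarantee at least one mutated gene
        off[i, cross] = mutant[cross]
    return np.clip(off, 0.0, 1.0)

pop = np.random.default_rng(0).random((6, 4))
off = de_rand_1_bin(pop)
print(off.shape)  # (6, 4)
```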

ddmtolab.Methods.Algo_Methods.algo_utils.dsmerge(S, Y, ds=1e-14)[source]

Merge data for duplicate/near-duplicate design sites.

Finds clusters of points within threshold distance in normalized space and merges them by averaging.

Parameters:
  • S (np.ndarray) – Design sites, shape (m, n)

  • Y (np.ndarray) – Responses, shape (m,) or (m, 1)

  • ds (float) – Threshold for near-duplicate detection (default: 1e-14)

Returns:

  • mS (np.ndarray) – Merged design sites

  • mY (np.ndarray) – Merged responses

ddmtolab.Methods.Algo_Methods.algo_utils.evaluation(problem, decs: List[ndarray], unified: bool = False, fill_value: float = 0.0, eval_objectives: bool | List[bool | int | List[int]] = True, eval_constraints: bool | List[bool | int | List[int]] = True) Tuple[List[ndarray], List[ndarray]][source]

Evaluate a list of decision variable matrices on multiple tasks.

Parameters:
  • problem (MTOP) – An instance of the MTOP class.

  • decs (list of ndarray) – List of decision variable matrices for each task, shape [n, d_i], scaled in [0,1].

  • unified (bool, optional) – If True, pad objectives to m_max and constraints to c_max. Default False.

  • fill_value (float, optional) – Value used for padding in unified mode. Default 0.0.

  • eval_objectives (bool or list, optional) –

    • True: evaluate all objectives for all tasks (default)

    • False: skip objective evaluation for all tasks

    • List: per-task specification, each element can be:

      • True/False: evaluate all/none

      • int: evaluate only the i-th objective

      • List[int]: evaluate specified objectives

  • eval_constraints (bool or list, optional) –

    • True: evaluate all constraints for all tasks (default)

    • False: skip constraint evaluation for all tasks

    • List: per-task specification, same format as eval_objectives

Returns:

  • objs (list of ndarray) – List of objective value matrices for each task.

    • Normal mode: shape [n, m_i] or [n, len(selected)]

    • Unified mode: shape [n, m_max]

  • cons (list of ndarray) – List of constraint value matrices for each task.

    • Normal mode: shape [n, c_i] or [n, 1] if no constraints

    • Unified mode: shape [n, c_max]

ddmtolab.Methods.Algo_Methods.algo_utils.ga_generation(parents: ndarray, muc: float, mum: float) ndarray[source]

Generate offspring population using genetic algorithm operators.

Applies simulated binary crossover (SBX) and polynomial mutation to create offspring from parent population.

Parameters:
  • parents (np.ndarray) – Parent population, shape (n, d)

  • muc (float) – Distribution index for crossover

  • mum (float) – Distribution index for mutation

Returns:

offdecs – Offspring decision variables, shape (n, d)

Return type:

np.ndarray
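
SBX and polynomial mutation are standard textbook operators; a compact sketch in normalized [0, 1] space (details such as the per-gene mutation probability 1/d are assumptions, not necessarily the library's choices):

```python
import numpy as np

def sbx_pm(parents, muc=2.0, mum=5.0, rng=None):
    """SBX crossover then polynomial mutation, genes in normalized [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = parents.shape
    half = n // 2
    p1, p2 = parents[:half], parents[half:2 * half]
    # Simulated binary crossover: spread factor beta from distribution index muc
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (muc + 1)),
                    (2 * (1 - u)) ** (-1 / (muc + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    off = np.clip(np.vstack([c1, c2]), 0.0, 1.0)
    # Polynomial mutation, applied per gene with probability 1/d
    u = rng.random(off.shape)
    mutate = rng.random(off.shape) < 1.0 / d
    delta = np.where(u < 0.5,
                     (2 * u) ** (1 / (mum + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (mum + 1)))
    return np.clip(off + mutate * delta, 0.0, 1.0)

pop = np.random.default_rng(1).random((10, 5))
print(sbx_pm(pop).shape)  # (10, 5)
```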

ddmtolab.Methods.Algo_Methods.algo_utils.init_history(decs: List[ndarray], objs: List[ndarray], cons: List[ndarray] | None = None) tuple[List[List[ndarray]], List[List[ndarray]]] | tuple[List[List[ndarray]], List[List[ndarray]], List[List[ndarray]]][source]

Initialize history storage for populations across generations.

Parameters:
  • decs (List[np.ndarray]) – Initial decision variables for each task. decs[i] has shape (n_samples, dim) for task i.

  • objs (List[np.ndarray]) – Initial objective values for each task. objs[i] has shape (n_samples, n_objs) for task i.

  • cons (List[np.ndarray], optional) – Initial constraint values for each task (default: None). cons[i] has shape (n_samples, n_cons) for task i.

Returns:

  • all_decs (List[List[np.ndarray]]) – History storage for decision variables

  • all_objs (List[List[np.ndarray]]) – History storage for objective values

  • all_cons (List[List[np.ndarray]], optional) – History storage for constraint values (only returned if cons is not None)

ddmtolab.Methods.Algo_Methods.algo_utils.initialization(problem: MTOP, n: int | List[int], method: str = 'random', the_same: bool = False) List[ndarray][source]

Initialize decision variable matrices for multiple tasks.

Parameters:
  • problem (MTOP) – An instance of the MTOP class

  • n (Union[int, List[int]]) –

    Number of samples per task.

    • If int: same number of samples for all tasks

    • If list: number of samples for each task, e.g., [30, 50]

  • method (str, optional) – Sampling method: ‘random’ or ‘lhs’ (default: ‘random’)

  • the_same (bool, optional) – If True, all tasks share the same sample points (default: False). For tasks with different dimensions, samples are generated in the maximum dimension and then truncated to each task’s dimension.

Returns:

decs – List of decision variable matrices for each task. decs[i] has shape (n_i, d_i) for task i.

Return type:

List[np.ndarray]

ddmtolab.Methods.Algo_Methods.algo_utils.merge_archive(arc_decs, arc_objs, new_decs, new_objs)[source]

Update archive by merging and removing duplicates.

Parameters:
  • arc_decs (np.ndarray) – Current archive decision variables

  • arc_objs (np.ndarray) – Current archive objective values

  • new_decs (np.ndarray) – Decision variables of new solutions

  • new_objs (np.ndarray) – Objective values of new solutions

Returns:

merged_decs, merged_objs – Updated archive decision variables and objective values

Return type:

Tuple[np.ndarray, np.ndarray]

ddmtolab.Methods.Algo_Methods.algo_utils.nd_sort(objs: ndarray, *args) Tuple[ndarray, int][source]

Perform non-dominated sorting on a population of objective values.

Parameters:
  • objs (np.ndarray) – Objective value matrix, shape (n, m)

  • *args (tuple) –

    Optional arguments:

    • (n_sort,): Number of solutions to sort

    • (cons, n_sort): Constraint matrix and number of solutions to sort

Returns:

  • front_no (np.ndarray) – Non-dominated front number for each solution, shape (n,)

  • max_fno (int) – Maximum front number assigned
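
A compact sketch of unconstrained non-dominated sorting for minimization (the library version additionally supports constraints and partial sorting via *args):

```python
import numpy as np

def nd_sort(objs):
    """Pareto front number per solution (1 = non-dominated), minimization."""
    n = len(objs)
    # dominates[i, j]: i is no worse in every objective and strictly better in one
    dominates = (np.all(objs[:, None] <= objs[None, :], axis=2)
                 & np.any(objs[:, None] < objs[None, :], axis=2))
    front_no = np.zeros(n, dtype=int)
    remaining = np.ones(n, dtype=bool)
    fno = 0
    while remaining.any():
        fno += 1
        # current front: remaining solutions not dominated by any remaining solution
        current = remaining & ~np.any(dominates & remaining[:, None], axis=0)
        front_no[current] = fno
        remaining &= ~current
    return front_no, fno

objs = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
front_no, max_fno = nd_sort(objs)
print(front_no, max_fno)  # [1 1 2] 2
```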

ddmtolab.Methods.Algo_Methods.algo_utils.rbf_build(X_train, Y_train, bf_c=1.0)[source]

Build an RBF (Multiquadric) interpolation model.

Parameters:
  • X_train (np.ndarray) – Training inputs, shape (n, d)

  • Y_train (np.ndarray) – Training outputs, shape (n,) or (n, 1)

  • bf_c (float) – Multiquadric parameter (default: 1.0)

Returns:

model – RBF model containing coefficients and parameters

Return type:

dict

ddmtolab.Methods.Algo_Methods.algo_utils.rbf_predict(model, X_train, X_query)[source]

Predict using RBF model.

Parameters:
  • model (dict) – RBF model from rbf_build

  • X_train (np.ndarray) – Training inputs, shape (n, d)

  • X_query (np.ndarray) – Query points, shape (nq, d)

Returns:

Y_pred – Predicted values, shape (nq,)

Return type:

np.ndarray
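
A minimal sketch of multiquadric RBF interpolation matching the rbf_build/rbf_predict pair (input normalization and dsmerge preprocessing, which the library may apply, are omitted):

```python
import numpy as np

def rbf_fit(X, y, c=1.0):
    """Solve for multiquadric coefficients: phi(r) = sqrt(r^2 + c^2)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return np.linalg.solve(np.sqrt(r**2 + c**2), y)

def rbf_eval(coef, X_train, X_query, c=1.0):
    """Evaluate the interpolant at query points."""
    r = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    return np.sqrt(r**2 + c**2) @ coef

X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 1.0, 0.0])
coef = rbf_fit(X, y)
print(rbf_eval(coef, X, X))  # reproduces y at the training sites
```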

ddmtolab.Methods.Algo_Methods.algo_utils.reorganize_initial_data(data: List[ndarray], nt: int, n_initial_per_task: List[int], interval: int = 1) List[List[ndarray]][source]

Reorganize initial data by task and number of initial points.

Parameters:
  • data (List[np.ndarray]) – Original data list, where data[i] is the data array for task i

  • nt (int) – Number of tasks

  • n_initial_per_task (List[int]) – List of number of initial points for each task

  • interval (int, optional) –

    Interval for selecting points (default: 1).

    • interval=1: 1, 2, 3, 4, … points

    • interval=2: 2, 4, 6, 8, … points

    • interval=k: k, 2k, 3k, 4k, … points (plus remaining points if not divisible)

Returns:

all_data – Reorganized data

Return type:

List[List[np.ndarray]]

ddmtolab.Methods.Algo_Methods.algo_utils.spea2_fitness(pop_obj, pop_con=None)[source]

Calculate SPEA2 fitness with constrained dominance.

Parameters:
  • pop_obj (np.ndarray, shape (N, M)) – Objective values.

  • pop_con (np.ndarray, shape (N, C), optional) – Constraint violation values (positive = violation).

Returns:

fitness – SPEA2 fitness values. Lower is better; < 1 means non-dominated.

Return type:

np.ndarray, shape (N,)

ddmtolab.Methods.Algo_Methods.algo_utils.spea2_truncation(pop_obj, N)[source]

Select N solutions by iteratively removing the most crowded. Uses SPEA2-style lexicographic nearest-neighbor comparison.

Parameters:
  • pop_obj (np.ndarray, shape (n, M)) – Objectives.

  • N (int) – Number of solutions to keep.

Returns:

selected – Indices of selected solutions.

Return type:

np.ndarray

ddmtolab.Methods.Algo_Methods.algo_utils.spea2_truncation_fast(pop_obj, N)[source]

Select N solutions by iteratively removing the one with smallest nearest-neighbor distance.

Parameters:
  • pop_obj (np.ndarray, shape (n, M)) – Objectives.

  • N (int) – Number of solutions to keep.

Returns:

selected – Indices of selected solutions.

Return type:

np.ndarray

ddmtolab.Methods.Algo_Methods.algo_utils.tournament_selection(K: int, N: int, *fitness_arrays: ndarray, rng: Generator | None = None) ndarray[source]

Perform tournament selection on a population.

Parameters:
  • K (int) – Tournament size. If K <= 1, selection is purely random with replacement.

  • N (int) – Number of individuals to select

  • *fitness_arrays (np.ndarray) – One or more arrays of fitness values. Higher fitness is considered better.

  • rng (np.random.Generator, optional) – NumPy random number generator. If None, a new default RNG is created.

Returns:

selected – Array of selected individual indices, shape (N,), dtype=int

Return type:

np.ndarray
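
A sketch of K-way tournament selection with replacement (higher fitness wins each bout; the library's lexicographic comparison across multiple fitness arrays is omitted):

```python
import numpy as np

def tournament(K, N, fitness, rng=None):
    """K-way tournament with replacement; higher fitness wins each bout."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = rng.integers(len(fitness), size=(N, max(K, 1)))
    winners = candidates[np.arange(N), np.argmax(fitness[candidates], axis=1)]
    return winners

fitness = np.array([0.1, 0.9, 0.5, 0.2])
idx = tournament(K=2, N=8, fitness=fitness, rng=np.random.default_rng(0))
print(idx)  # eight indices, biased toward the fitter individuals
```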

Bayesian Optimization Utilities

ddmtolab.Methods.Algo_Methods.bo_utils.bo_next_point(dim_i: int, decs_i: ndarray, objs_i: ndarray, data_type: dtype = torch.float32) ndarray[source]

Get the next sampling point using Bayesian Optimization

Parameters:
  • dim_i (int) – Dimension of decision variables

  • decs_i (np.ndarray) – Historical decision variables, shape: (n_samples, dim_i)

  • objs_i (np.ndarray) – Historical objective function values, shape: (n_samples,) or (n_samples, 1)

  • data_type (torch.dtype, optional) – Data type, default is torch.float32

Returns:

candidate_np – Next sampling point, shape: (1, dim_i)

Return type:

np.ndarray

ddmtolab.Methods.Algo_Methods.bo_utils.bo_next_point_lcb(dim_i: int, decs_i: ndarray, objs_i: ndarray, data_type: dtype = torch.float32, kappa: float = 2.0) tuple[source]

Get the next sampling point using Bayesian Optimization with LCB acquisition

Parameters:
  • dim_i (int) – Dimension of decision variables

  • decs_i (np.ndarray) – Historical decision variables, shape: (n_samples, dim_i)

  • objs_i (np.ndarray) – Historical objective function values, shape: (n_samples,) or (n_samples, 1)

  • data_type (torch.dtype, optional) – Data type, default is torch.float32

  • kappa (float, optional) – Exploration weight for LCB, default is 2.0

Returns:

  • candidate_np (np.ndarray) – Next sampling point, shape: (1, dim_i)

  • gp (SingleTaskGP) – Fitted Gaussian Process model

ddmtolab.Methods.Algo_Methods.bo_utils.gp_build(decs: ndarray, objs: ndarray, data_type: dtype = torch.float32) SingleTaskGP[source]

Build and fit a Single-Task Gaussian Process model.

Parameters:
  • decs (np.ndarray) – Historical decision variables, shape: (n_samples, dim)

  • objs (np.ndarray) – Historical objective function values, shape: (n_samples,) or (n_samples, 1)

  • data_type (torch.dtype, optional) – Data type for tensors (default: torch.float32)

Returns:

gp – Fitted Gaussian Process model

Return type:

SingleTaskGP

ddmtolab.Methods.Algo_Methods.bo_utils.gp_predict(gp: SingleTaskGP, test_X: ndarray, data_type: dtype = torch.float32) tuple[ndarray, ndarray][source]

Predict objectives and uncertainties using a trained Gaussian Process model.

Parameters:
  • gp (SingleTaskGP) – Trained Gaussian Process model

  • test_X (np.ndarray) – Test decision variables, shape: (n_candidates, dim)

  • data_type (torch.dtype, optional) – Data type for tensors (default: torch.float32)

Returns:

  • pred_objs (np.ndarray) – Predicted objective values, shape: (n_candidates, 1)

  • pred_std (np.ndarray) – Predicted standard deviations, shape: (n_candidates, 1)
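
A prediction of this form follows the standard exact GP posterior equations. A self-contained NumPy sketch with an RBF kernel (a simplification for illustration; the actual function delegates to a fitted BoTorch SingleTaskGP):

```python
import numpy as np

def rbf(A, B, ls=0.5):
    # Squared-exponential kernel between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_predict_np(train_X, train_y, test_X, noise=1e-8):
    # Exact GP posterior mean and standard deviation
    K = rbf(train_X, train_X) + noise * np.eye(len(train_X))
    Ks = rbf(test_X, train_X)
    mean = Ks @ np.linalg.solve(K, train_y)
    cov = rbf(test_X, test_X) - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean.reshape(-1, 1), std.reshape(-1, 1)  # shapes (n, 1)

train_X = np.array([[0.0], [0.5], [1.0]])
train_y = np.sin(train_X[:, 0])
mean, std = gp_predict_np(train_X, train_y, train_X)
```

At the training inputs the posterior interpolates: mean is close to train_y and std is close to zero, which matches the (n_candidates, 1) shape contract above.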

ddmtolab.Methods.Algo_Methods.bo_utils.mo_gp_build(decs, objs, data_type=torch.float32)[source]

Build Gaussian Process models for each objective in multi-objective optimization.

Parameters:
  • decs (np.ndarray) – Decision variables, shape (N, D) where N is the number of samples and D is the dimension of decision space.

  • objs (np.ndarray) – Objective values, shape (N, M) where M is the number of objectives.

  • data_type (torch.dtype, optional) – Data type for GP models (default: torch.float32).

Returns:

models – List of trained GP models, one for each objective.

Return type:

list

ddmtolab.Methods.Algo_Methods.bo_utils.mo_gp_predict(models, x, data_type=torch.float32, mse=False)[source]

Predict objectives using trained GP models for multi-objective optimization.

Parameters:
  • models (list) – List of trained GP models (one per objective), as returned by mo_gp_build.

  • x (np.ndarray) – Decision variables to predict, shape (N, D) where N is the number of samples and D is the dimension of decision space.

  • data_type (torch.dtype, optional) – Data type for GP prediction (default: torch.float32).

  • mse (bool, optional) – If True, also return the Mean Squared Error (variance) of predictions. If False, only return predicted objective values (default: False).

Returns:

  • pred_objs (np.ndarray) – Predicted objective values, shape (N, M) where M is the number of objectives.

  • pred_mse (np.ndarray, optional) – Predicted MSE (variance) for each objective, shape (N, M). Only returned if mse=True.
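
The shape contract of this build/predict pair is easy to check with stand-in surrogates. This sketch replaces each per-objective GP with a linear least-squares fit (an assumption for illustration only) to show the list-of-models layout and the (N, M) stacking that mo_gp_build / mo_gp_predict follow:

```python
import numpy as np

rng = np.random.default_rng(0)
decs = rng.random((20, 3))                               # (N, D) training inputs
objs = np.stack([decs.sum(1), decs[:, 0] ** 2], axis=1)  # (N, M=2) training objectives

# One surrogate per objective, mirroring mo_gp_build's list of models
models = [np.linalg.lstsq(decs, objs[:, m], rcond=None)[0]
          for m in range(objs.shape[1])]

x = rng.random((5, 3))                                   # candidates to predict
pred_objs = np.stack([x @ w for w in models], axis=1)    # (5, 2), one column per objective
```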

ddmtolab.Methods.Algo_Methods.bo_utils.mtbo_next_point(mtgp: MultiTaskGP, task_id: int, objs: list[ndarray], dims: list[int], nt: int, data_type: dtype = torch.float32) ndarray[source]

Get the next sampling point using Multi-Task Bayesian Optimization.

Parameters:
  • mtgp (MultiTaskGP) – Trained Multi-Task Gaussian Process model

  • task_id (int) – Task index for which to find the next point

  • objs (list[np.ndarray]) – List of objective value matrices for each task

  • dims (list[int]) – List of dimensionalities for each task

  • nt (int) – Total number of tasks

  • data_type (torch.dtype, optional) – Data type for tensors (default: torch.float32)

Returns:

candidate_np – Next sampling point, shape: (1, dims[task_id])

Return type:

np.ndarray

ddmtolab.Methods.Algo_Methods.bo_utils.mtgp_build(decs: list[ndarray], objs: list[ndarray], dims: list[int], data_type: dtype = torch.float32) MultiTaskGP[source]

Build a Multi-Task Gaussian Process model.

Parameters:
  • decs (list[np.ndarray]) – List of decision variable matrices for each task

  • objs (list[np.ndarray]) – List of objective value matrices for each task

  • dims (list[int]) – List of dimensionalities for each task

  • data_type (torch.dtype, optional) – Data type for tensors (default: torch.float32)

Returns:

mtgp – Fitted Multi-Task GP model

Return type:

MultiTaskGP
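
Multi-task GPs typically consume one stacked design matrix whose last column is the task index, with each task's decisions padded to a common width. A hedged sketch of that data assembly (the zero-padding scheme here is an assumption for illustration, not necessarily what mtgp_build does internally):

```python
import numpy as np

decs = [np.random.rand(8, 2), np.random.rand(5, 3)]  # per-task decision matrices
dims = [2, 3]
d_max = max(dims)

rows = []
for task_id, X in enumerate(decs):
    Xp = np.zeros((len(X), d_max + 1))  # pad to a common width, plus a task column
    Xp[:, :X.shape[1]] = X
    Xp[:, -1] = task_id                 # task index as the last feature
    rows.append(Xp)
train_X = np.vstack(rows)               # (13, d_max + 1): one joint training matrix
```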

ddmtolab.Methods.Algo_Methods.bo_utils.mtgp_predict(mtgp: MultiTaskGP, off_decs: ndarray, task_id: int, dims: list[int], nt: int, obj_min_vals: list[float] | None = None, obj_max_vals: list[float] | None = None, data_type: dtype = torch.float32) tuple[ndarray, ndarray][source]

Use Multi-Task GP to predict objectives and uncertainties for candidate solutions.

Parameters:
  • mtgp (MultiTaskGP) – Trained Multi-Task Gaussian Process model

  • off_decs (np.ndarray) – Candidate decision variables, shape (n_candidates, dim)

  • task_id (int) – Task index for prediction

  • dims (list[int]) – List of dimensionalities for each task

  • nt (int) – Total number of tasks

  • obj_min_vals (list[float] | None) – Minimum objective values for each task (for denormalization)

  • obj_max_vals (list[float] | None) – Maximum objective values for each task (for denormalization)

  • data_type (torch.dtype, optional) – Data type for tensors (default: torch.float32)

Returns:

  • pred_objs (np.ndarray) – Predicted objective values, shape (n_candidates, 1)

  • pred_std (np.ndarray) – Predicted standard deviations, shape (n_candidates, 1)

ddmtolab.Methods.Algo_Methods.bo_utils.mtgp_task_corr(mtgp: MultiTaskGP) ndarray[source]

Extract task correlation matrix from multi-task Gaussian process model.

Parameters:

mtgp (MultiTaskGP) – Trained Multi-Task Gaussian Process model

Returns:

task_corr – Task correlation matrix (normalized covariance matrix)

Return type:

np.ndarray
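
Normalizing a task covariance matrix into the correlation matrix returned here is a one-liner. A sketch with a hand-written 2-task covariance (the real matrix comes from the fitted MultiTaskGP's task kernel):

```python
import numpy as np

cov = np.array([[2.0, 1.2],
                [1.2, 1.0]])      # illustrative 2x2 task covariance
d = np.sqrt(np.diag(cov))
task_corr = cov / np.outer(d, d)  # unit diagonal, symmetric, entries in [-1, 1]
```

Off-diagonal entries near 1 indicate strongly related tasks, which is the signal multi-task BO exploits for knowledge transfer.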

Uniform Point Generation

ddmtolab.Methods.Algo_Methods.uniform_point.uniform_point(N: int, M: int, method: str = 'NBI') tuple[ndarray, int][source]

Generate a set of uniformly distributed points.

Parameters:
  • N (int) – Approximate number of points to generate

  • M (int) – Number of objectives/dimensions

  • method (str, optional) –

    Sampling method to use (default: ‘NBI’). Options:

    • ’NBI’: Normal-boundary intersection method

    • ’ILD’: Incremental lattice design

    • ’MUD’: Mixture uniform design

    • ’grid’: Grid sampling

    • ’Latin’: Latin hypercube sampling

Returns:

  • W (np.ndarray) – Array of uniformly distributed points, shape (N_actual, M)

  • N_actual (int) – Actual number of points generated
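
The ‘NBI’ option is commonly implemented via the Das–Dennis simplex-lattice construction, which places C(H+M-1, M-1) weight vectors on the unit simplex for H divisions per objective. A minimal sketch of that construction (H is chosen by hand here, whereas the library derives it from the requested N; this is why N_actual can differ from N):

```python
from itertools import combinations
import numpy as np

def das_dennis(H, M):
    """Weight vectors on the (M-1)-simplex with H uniform divisions."""
    pts = []
    for c in combinations(range(H + M - 1), M - 1):
        prev, row = -1, []
        for pos in c:
            row.append(pos - prev - 1)   # gap sizes between chosen positions
            prev = pos
        row.append(H + M - 2 - prev)     # remainder goes to the last component
        pts.append(row)
    return np.asarray(pts, dtype=float) / H  # each row sums to 1

W = das_dennis(4, 3)  # C(6, 2) = 15 points for H=4 divisions, M=3 objectives
```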