Algorithms
This chapter introduces the algorithm design philosophy and construction rules in D²MTOLab, providing comprehensive guidance for implementing custom optimization algorithms.
Algorithm Construction
Considering the complexity and diversity of data-driven multitask optimization, D²MTOLab adopts a loosely-coupled algorithm design philosophy. The platform does not mandate algorithms to inherit specific base classes or implement fixed interface methods, thereby avoiding restrictions on algorithm flexibility. This design approach offers the following advantages:
Enhanced Platform Compatibility: Traditional gradient-based methods, evolutionary algorithms, advanced data-driven multitask optimization algorithms, and hybrid innovative architectures can all be seamlessly integrated into the platform.
Improved Development Convenience: Users can quickly implement algorithms across the full spectrum—from inexpensive single-task single-objective unconstrained optimization to expensive multitask multiobjective constrained optimization—without understanding complex class inheritance hierarchies.
Guaranteed Algorithm Freedom: Users are free to design data structures, optimization workflows, and knowledge transfer strategies according to specific problem characteristics and algorithm mechanisms, without framework constraints.
To facilitate subsequent data processing and efficient coordination with the platform's experiment and data analysis modules, D²MTOLab imposes only three basic rules on algorithm construction, ensuring normal platform functionality while preserving maximum flexibility in algorithm development.
Rule 1: Algorithm Framework
Algorithms must be implemented as classes and include the following core components:
- Algorithm Metadata: a class attribute `algorithm_information` dictionary declaring the algorithm's basic characteristics
- Metadata Access Method: a class method `get_algorithm_information` for retrieving and displaying algorithm metadata
- Initialization Method: an `__init__` method that must accept a `problem` (MTOP instance) as the first parameter
- Optimization Method: an `optimize` method that executes the optimization process and returns a `Results` object
Example Structure:
```python
class AlgorithmName:
    # Component 1: Algorithm metadata (required)
    algorithm_information = {
        'n_tasks': '1-K',               # Supported task number types
        'dims': 'unequal',              # Decision variable dimension constraint
        'objs': 'unequal',              # Objective number constraint
        'n_objs': '1-M',                # Objective quantity type
        'cons': 'unequal',              # Constraint number constraint
        'n_cons': '0-C',                # Constraint quantity type
        'expensive': 'False',           # Whether expensive optimization
        'knowledge_transfer': 'False',  # Whether knowledge transfer involved
        'param': 'unequal'              # Algorithm parameter constraint
    }

    # Component 2: Metadata access method (required)
    @classmethod
    def get_algorithm_information(cls, print_info=True):
        return get_algorithm_information(cls, print_info)

    # Component 3: Initialization method (required)
    def __init__(self, problem, n=None, max_nfes=None, ...):
        self.problem = problem
        # Other parameter initialization

    # Component 4: Optimization method (required)
    def optimize(self):
        # Algorithm implementation
        return results
```
Rule 2: Algorithm Input
Algorithms must accept an MTOP instance as an input parameter. The MTOP instance encapsulates complete information about the optimization problem, through which the algorithm obtains all problem information. Other parameters can be freely designed according to algorithm requirements.
Example:
```python
def __init__(self, problem, n=None, max_nfes=None, ...):
    """
    Args:
        problem: MTOP instance (required parameter)
        n: Population size per task (custom parameter)
        ...: Other algorithm-specific parameters
    """
    self.problem = problem  # Store problem instance
    # Other parameter initialization
```
Rule 3: Algorithm Output
Algorithms must return a result object conforming to the Results dataclass specification. The Results class encapsulates complete information about the optimization process:
Results Dataclass Definition:
```python
@dataclass
class Results:
    """Optimization results container"""
    best_decs: List[np.ndarray]       # Best decision variables for each task
    best_objs: List[np.ndarray]       # Best objective values for each task
    all_decs: List[List[np.ndarray]]  # Decision variable evolution history
    all_objs: List[List[np.ndarray]]  # Objective value evolution history
    runtime: float                    # Total runtime (seconds)
    max_nfes: List[int]               # Max function evaluations per task
    best_cons: Optional[List[np.ndarray]] = None        # Best constraint values
    all_cons: Optional[List[List[np.ndarray]]] = None   # Constraint history
```
Results Fields Description
| Field | Data Type | Description |
|---|---|---|
| `best_decs` | `List[np.ndarray]` | Best decision variables. List length is the number of tasks K. |
| `best_objs` | `List[np.ndarray]` | Best objective values. List length is K. |
| `all_decs` | `List[List[np.ndarray]]` | Decision variable evolution history. |
| `all_objs` | `List[List[np.ndarray]]` | Objective value evolution history. |
| `runtime` | `float` | Total runtime in seconds, recorded from start to end for performance evaluation. |
| `max_nfes` | `List[int]` | Maximum function evaluations per task. List length is K. |
| `best_cons` | `Optional[List[np.ndarray]]` | Best constraint values (optional). Used only in constrained optimization. |
| `all_cons` | `Optional[List[List[np.ndarray]]]` | Constraint evolution history (optional). |
The input/output structure is straightforward: input must include an MTOP instance, and output must follow the specified data structure.
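To make the `Results` fields concrete, the following sketch fills one in by hand for a hypothetical two-task run. The dataclass definition is reproduced locally so the snippet runs standalone (in the platform it is imported from `algo_utils`); the population size, dimensions, and the "best" selection here are illustrative placeholders, not platform behavior.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

# Local copy of the Results dataclass from the platform spec,
# reproduced here so the sketch runs standalone.
@dataclass
class Results:
    """Optimization results container"""
    best_decs: List[np.ndarray]
    best_objs: List[np.ndarray]
    all_decs: List[List[np.ndarray]]
    all_objs: List[List[np.ndarray]]
    runtime: float
    max_nfes: List[int]
    best_cons: Optional[List[np.ndarray]] = None
    all_cons: Optional[List[List[np.ndarray]]] = None

rng = np.random.default_rng(0)
K = 2            # two tasks (hypothetical)
dims = [10, 5]   # per-task decision dimensions (hypothetical)
n_gens = 3       # generations recorded in the history

# One population snapshot per generation, per task:
# all_decs[k][g] is an (n, dims[k]) array, all_objs[k][g] is (n, 1)
all_decs = [[rng.random((20, d)) for _ in range(n_gens)] for d in dims]
all_objs = [[rng.random((20, 1)) for _ in range(n_gens)] for _ in dims]

results = Results(
    best_decs=[task_hist[-1][0] for task_hist in all_decs],   # placeholder "best"
    best_objs=[task_hist[-1][:1] for task_hist in all_objs],  # placeholder "best"
    all_decs=all_decs,
    all_objs=all_objs,
    runtime=0.0,
    max_nfes=[20 * n_gens] * K,
)
# best_cons / all_cons stay None for this unconstrained example
```

Note that the two optional constraint fields simply default to `None` when the problem is unconstrained, so unconstrained algorithms never need to touch them.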
Algorithm Metadata
Algorithms must declare their basic characteristics through the algorithm_information class attribute dictionary to facilitate algorithm management, experiment matching, and performance analysis. The key fields are described below:
Metadata Fields
| Field | Description |
|---|---|
| `n_tasks` | Supported task numbers. |
| `dims` | Decision variable dimension constraint. |
| `objs` | Objective number constraint. |
| `n_objs` | Objective quantity type. |
| `cons` | Constraint number constraint. |
| `n_cons` | Constraint quantity type. |
| `expensive` | Whether expensive optimization (involving surrogate models). |
| `knowledge_transfer` | Whether inter-task knowledge transfer is involved. |
| `param` | Algorithm parameter constraint. |
Example: GA Metadata Declaration:
```python
class GA:
    algorithm_information = {
        'n_tasks': '1-K',               # Supports single and multitask
        'dims': 'unequal',              # Supports unequal dimensions
        'objs': 'unequal',              # Supports unequal objectives
        'n_objs': '1',                  # Single-objective only
        'cons': 'unequal',              # Supports unequal constraints
        'n_cons': '0',                  # Unconstrained only
        'expensive': 'False',           # Not expensive (no surrogate)
        'knowledge_transfer': 'False',  # No knowledge transfer
        'n': 'unequal',                 # Different population sizes
        'max_nfes': 'unequal'           # Different max evaluations
    }

    @classmethod
    def get_algorithm_information(cls, print_info=True):
        """Get and print algorithm metadata"""
        return get_algorithm_information(cls, print_info)
```
Viewing Algorithm Metadata
D²MTOLab provides the get_algorithm_information class method for each algorithm to retrieve and display metadata:
```python
from Algorithms.STSO.GA import GA

# Call class method to view GA metadata
GA.get_algorithm_information()
```
Output:
```
🤖️ GA
Algorithm Information:
- n_tasks: 1-K
- dims: unequal
- objs: unequal
- n_objs: 1
- cons: unequal
- n_cons: 0
- expensive: False
- knowledge_transfer: False
- n: unequal
- max_nfes: unequal
```
This method prints the algorithm name and all metadata fields in a structured format, helping users quickly understand the algorithm’s scope and characteristic constraints. By viewing the metadata, users can determine whether an algorithm is suitable for their optimization problem.
The method also supports returning metadata as a dictionary for programmatic processing:
```python
from Algorithms.STSO.GA import GA

info = GA.get_algorithm_information(print_info=False)
print(info)
```
Output:
```
{'n_tasks': '1-K', 'dims': 'unequal', 'objs': 'unequal', 'n_objs': '1',
 'cons': 'unequal', 'n_cons': '0', 'expensive': 'False',
 'knowledge_transfer': 'False', 'n': 'unequal', 'max_nfes': 'unequal'}
```
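Because the metadata dictionary has a fixed shape, suitability checks can also be automated. The helper below is a sketch, not a D²MTOLab API: `is_suitable` and its reading of the range strings (a bare number such as `'1'` meaning "exactly this many", a range such as `'1-K'` meaning "this many or more") are assumptions made for illustration.

```python
# Hypothetical helper (not part of D²MTOLab): check whether a problem's
# characteristics fall inside an algorithm's declared metadata.
def is_suitable(info: dict, n_tasks: int, n_objs: int, n_cons: int) -> bool:
    def in_range(spec: str, value: int) -> bool:
        # Assumed convention: '1' means exactly 1; '1-K' / '0-C' / '1-M'
        # mean "lower bound or more".
        lo, dash, _ = spec.partition('-')
        return value >= int(lo) if dash else value == int(lo)

    return (in_range(info['n_tasks'], n_tasks)
            and in_range(info['n_objs'], n_objs)
            and in_range(info['n_cons'], n_cons))

# GA metadata as returned by get_algorithm_information(print_info=False)
ga_info = {'n_tasks': '1-K', 'dims': 'unequal', 'objs': 'unequal',
           'n_objs': '1', 'cons': 'unequal', 'n_cons': '0',
           'expensive': 'False', 'knowledge_transfer': 'False'}

print(is_suitable(ga_info, n_tasks=3, n_objs=1, n_cons=0))  # prints: True
print(is_suitable(ga_info, n_tasks=1, n_objs=2, n_cons=0))  # prints: False
```

A check like this could be run over every algorithm class in a directory to shortlist candidates for a given problem before any experiments are configured.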
Using Algorithms
Basic Usage
Example: Single-Task Optimization:
```python
from ddmtolab.Methods.mtop import MTOP
from ddmtolab.Algorithms.STSO.GA import GA
import numpy as np

# Define objective function
def sphere(x):
    return np.sum(x**2, axis=1)

# Create problem instance using MTOP
problem = MTOP()
problem.add_task(sphere, dim=30)

# Initialize algorithm
algorithm = GA(
    problem=problem,
    n=100,            # Population size
    max_nfes=10000,   # Max function evaluations
    pc=0.9,           # Crossover probability
    pm=0.1            # Mutation probability
)

# Run optimization
results = algorithm.optimize()

# Access results
print(f"Best objective: {results.best_objs[0]}")
print(f"Runtime: {results.runtime:.2f}s")
```
Example: Multitask Optimization:
```python
from ddmtolab.Methods.mtop import MTOP
from ddmtolab.Algorithms.MTSO.MFEA import MFEA
import numpy as np

# Define objective functions
def sphere(x):
    return np.sum(x**2, axis=1)

def rosenbrock(x):
    return np.sum(100*(x[:, 1:] - x[:, :-1]**2)**2 + (1 - x[:, :-1])**2, axis=1)

def rastrigin(x):
    return 10*x.shape[1] + np.sum(x**2 - 10*np.cos(2*np.pi*x), axis=1)

# Create multitask problem using MTOP
problem = MTOP()
problem.add_task(sphere, dim=30)
problem.add_task(rosenbrock, dim=30)
problem.add_task(rastrigin, dim=30)

# Initialize MFEA
algorithm = MFEA(
    problem=problem,
    n=100,
    max_nfes=10000,
    rmp=0.3   # Random mating probability
)

# Run optimization
results = algorithm.optimize()

# Compare task performance
for i in range(problem.n_tasks):
    print(f"Task {i+1} best: {results.best_objs[i]}")
```
Advanced Configuration
Custom Parameter Settings:
```python
# Configure algorithm with custom parameters
algorithm = GA(
    problem=problem,
    n=[200],                  # Larger population
    max_nfes=[50000],         # More evaluations
    pc=0.85,                  # Custom crossover rate
    pm=0.15,                  # Custom mutation rate
    selection='tournament',   # Selection method
    tournament_size=3         # Tournament size
)
```
Accessing Optimization History:
```python
results = algorithm.optimize()

# Get evolution trajectory for task 0
obj_history = results.all_objs[0]

# Plot convergence curve
import matplotlib.pyplot as plt

best_per_gen = [min(gen_objs) for gen_objs in obj_history]
plt.plot(best_per_gen)
plt.xlabel('Generation')
plt.ylabel('Best Objective Value')
plt.title('Convergence Curve')
plt.show()
```
Implementing Custom Algorithms
You can easily implement custom algorithms by following the three construction rules:
Example: Simple Custom Algorithm:
```python
import numpy as np
import time
from ddmtolab.Methods.Algo_Methods.algo_utils import (
    Results, get_algorithm_information,
    initialization, evaluation,
    init_history, append_history, build_save_results
)

class MyCustomAlgorithm:
    # Rule 1: Algorithm metadata
    algorithm_information = {
        'n_tasks': '1',
        'dims': 'unequal',
        'objs': 'unequal',
        'n_objs': '1',
        'cons': 'unequal',
        'n_cons': '0',
        'expensive': 'False',
        'knowledge_transfer': 'False',
        'param': 'unequal'
    }

    @classmethod
    def get_algorithm_information(cls, print_info=True):
        return get_algorithm_information(cls, print_info)

    # Rule 2: Accept MTOP instance
    def __init__(self, problem, n=100, max_nfes=10000,
                 save_data=True, save_path='./Data', name='MyAlgo'):
        self.problem = problem
        self.n = n
        self.max_nfes = max_nfes
        self.save_data = save_data
        self.save_path = save_path
        self.name = name

    # Rule 3: Return Results object
    def optimize(self):
        start_time = time.time()

        # Initialize population using algo_utils
        decs = initialization(self.problem, self.n)
        objs, cons = evaluation(self.problem, decs)

        # Initialize history tracking
        all_decs, all_objs, all_cons = init_history(decs, objs, cons)

        # Main optimization loop
        nfes = self.n
        while nfes < self.max_nfes:
            # Your optimization logic here
            # Generate new solutions, evaluate, select...

            # Track history
            append_history(all_decs, all_objs, all_cons, decs, objs, cons)
            nfes += self.n

        runtime = time.time() - start_time

        # Build and save results using utility function
        return build_save_results(
            problem=self.problem,
            all_decs=all_decs,
            all_objs=all_objs,
            all_cons=all_cons,
            runtime=runtime,
            max_nfes=self.max_nfes,
            save_data=self.save_data,
            save_path=self.save_path,
            name=self.name
        )
```
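The skeleton above relies on the platform's `algo_utils` helpers. To show the three rules working end to end without any platform dependency, the toy below swaps in a minimal problem stand-in and plain random search; `ToyProblem`, `RandomSearch`, and the `SimpleNamespace` result container are illustrative stand-ins, not D²MTOLab APIs.

```python
import time
from types import SimpleNamespace
import numpy as np

class ToyProblem:
    """Minimal stand-in for an MTOP instance (illustrative only)."""
    def __init__(self, funcs, dims):
        self.funcs, self.dims = funcs, dims
        self.n_tasks = len(funcs)

class RandomSearch:
    # Rule 1: metadata attribute plus access method
    algorithm_information = {'n_tasks': '1-K', 'n_objs': '1', 'n_cons': '0',
                             'expensive': 'False', 'knowledge_transfer': 'False'}

    @classmethod
    def get_algorithm_information(cls, print_info=True):
        if print_info:
            print(cls.algorithm_information)
        return cls.algorithm_information

    # Rule 2: the problem instance is the first __init__ parameter
    def __init__(self, problem, n=20, max_nfes=200, seed=0):
        self.problem, self.n, self.max_nfes = problem, n, max_nfes
        self.rng = np.random.default_rng(seed)

    # Rule 3: optimize() returns an object with the documented result fields
    def optimize(self):
        start = time.time()
        best_decs, best_objs, all_decs, all_objs = [], [], [], []
        for func, dim in zip(self.problem.funcs, self.problem.dims):
            decs_hist, objs_hist = [], []
            for _ in range(self.max_nfes // self.n):
                decs = self.rng.uniform(-5, 5, (self.n, dim))  # random sampling
                objs_hist.append(func(decs).reshape(-1, 1))
                decs_hist.append(decs)
            stacked = np.vstack(objs_hist)
            i = int(np.argmin(stacked))
            best_decs.append(np.vstack(decs_hist)[i])
            best_objs.append(stacked[i])
            all_decs.append(decs_hist)
            all_objs.append(objs_hist)
        # A plain namespace mirroring the Results fields keeps the toy dependency-free
        return SimpleNamespace(best_decs=best_decs, best_objs=best_objs,
                               all_decs=all_decs, all_objs=all_objs,
                               runtime=time.time() - start,
                               max_nfes=[self.max_nfes] * self.problem.n_tasks)

problem = ToyProblem([lambda x: np.sum(x**2, axis=1)], dims=[5])
res = RandomSearch(problem).optimize()
print(res.best_objs[0])
```

A real implementation would return the platform's `Results` dataclass (or use `build_save_results`) instead of a namespace, but the shape of the loop, the history tracking, and the input/output contract are the same.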
Available Algorithms
D²MTOLab provides 110+ optimization algorithms organized into four categories:
STSO (Single-Task Single-Objective)
Classical evolutionary algorithms and surrogate-assisted methods for single-objective optimization.
Inexpensive
- Genetic Algorithm
- Differential Evolution
- Particle Swarm Optimization
- Social Learning PSO
- Knowledge Learning PSO
- Competitive Swarm Optimizer
- Covariance Matrix Adaptation Evolution Strategy
- CMA-ES with Increasing Population
- Separable CMA-ES
- Matrix Adaptation Evolution Strategy
- Exponential Natural Evolution Strategy
- OpenAI Evolution Strategy
- Aquila Optimizer
- Grey Wolf Optimizer
- Equilibrium Optimizer
Expensive
- Bayesian Optimization
- Expected Exploration Improvement BO
- Efficient Surrogate-Assisted Optimization
- Self-adaptive Hierarchical PSO
- Surrogate-Assisted Competitive Swarm Optimizer
- Two-Layer RBF Surrogate-Assisted Optimization
- Gaussian Local Search with Self-adaptive DE
- Surrogate-Assisted EA with Auto-Configuration
- Data-Driven EA with Multi-Evolutionary Sampling Strategy
- Lipschitz Surrogate-Assisted Differential Evolution
STMO (Single-Task Multiobjective)
Multiobjective evolutionary algorithms and surrogate-assisted methods.
Inexpensive
- Non-dominated Sorting Genetic Algorithm II
- Non-dominated Sorting Genetic Algorithm III
- NSGA-II with Stochastic Dominance Ranking
- Strength Pareto Evolutionary Algorithm 2
- Multiobjective Evolutionary Algorithm based on Decomposition
- MOEA/D with Diversity Enhancement
- MOEA/D with Fitness-Rate-Rank Multi-Armed Bandit
- MOEA/D with Stable Matching
- Reference Vector Guided Evolutionary Algorithm
- Indicator-Based Evolutionary Algorithm
- Two-Archive Algorithm 2
- Multi-Surrogate Evolutionary Algorithm
- Constrained Two-Archive Evolutionary Algorithm
- Coevolutionary Constrained Multiobjective Optimization
Expensive
- Pareto Efficient Global Optimization
- Kriging-assisted RVEA
- Data-driven Surrogate-Assisted EA with Pareto Selection
- Kriging-assisted Two-Archive Algorithm
- Reference-based Multiobjective Optimization
- Adaptive Dropout Surrogate-Assisted PSO
- Classification-based Surrogate-assisted EA
- Distribution-Informed Surrogate-assisted Kriging
- Deep Reinforcement Learning Surrogate-Assisted EA
- Direction-based Hypervolume Expected Improvement
- Efficient Dropout Neural Network based AR-MOEA
- Expected Improvement Matrix based EGO
- Ensemble Model Surrogate-Assisted EA
- Kriging-Assisted Two-Archive Search
- Multigranularity Surrogate-Assisted EA
- Multi-Model Ranking Aggregation EA
- MOEA/D with Efficient Global Optimization
- Multiobjective Efficient Global Optimization
- Pairwise Comparison Surrogate-Assisted EA
- Pareto-based Efficient Algorithm
- Performance Indicator-based EA
- Surrogate-Assisted EA with Direction-Based Local Learning
- Self-Organizing Surrogate-Assisted Non-Dominated Sorting DE
- Two-phase EA with Probabilistic Dominance
- Constrained Push and Search MOEA
- Multi-Criteria Evolutionary Algorithm based on Decomposition
MTSO (Multitask Single-Objective)
Multitask evolutionary algorithms with knowledge transfer for single-objective optimization.
Inexpensive
- Multi-Factorial Evolutionary Algorithm
- Multi-Factorial Evolutionary Algorithm II
- Evolutionary Multitask Evolutionary Algorithm
- Evolution by Similarity
- Generalized MFEA
- Multitask Evolutionary Algorithm with Adaptive Distribution
- Multi-Knowledge Transfer Differential Evolution
- Multitask EA with Surrogate-assisted Optimization
- Self-Regulated Evolutionary Multitask Optimization
- Lower Confidence Bound Evolutionary Multitasking
- Block-Level Knowledge Transfer DE
- Distribution Direction-Assisted Two-Stage Knowledge Transfer
- Evolutionary Multitask Optimization with Adaptive Intensity
- Multifactorial EA with Adaptive Knowledge Transfer
- MFEA Based on Diffusion Gradient Descent
- MFEA with Variational Crossover
- Multitask DE with Adaptive Dual Knowledge Transfer
- Multitask EA with Hierarchical Knowledge Transfer Strategy
- Multitask EA with Progressive Auto-Encoding
- Multitask Evolution Strategy with Knowledge-Guided Sampling
- Scenario-based Self-Learning Transfer DE
- Transfer Task-averaged Natural Gradient Separable NES
Expensive
- Multitask Bayesian Optimization
- Resource Allocation Multitask Evolutionary Algorithm
- Self-adaptive Evolutionary Learning Framework
- Enhanced EEI-BO for Multitask Optimization
- Multi-task Max-value Bayesian Optimization
- BO with LCB and Curriculum Knowledge Transfer
- BO with LCB and Bidirectional Curriculum Knowledge Transfer
- MFEA with Single-Step Generative Model
- Surrogate-Assisted Evolutionary Framework with Adaptive Knowledge Transfer
MTMO (Multitask Multiobjective)
Multitask multiobjective evolutionary algorithms with knowledge transfer.
Inexpensive
- Multiobjective Multi-Factorial Evolutionary Algorithm
- Multiobjective MFEA II
- Multiobjective Evolutionary Multitask EA
- Multiobjective MTEA with Surrogate-assisted Optimization
- Multitask DE with Multi-Knowledge Transfer Adaptation
- Multitask EA/D with Dynamic Neighborhood
- Evolutionary Multitasking with Explicit Transfer
- Evolutionary Multitasking with Probabilistic Distribution
- Evolutionary Multitasking with Generative Strategies
- Multiobjective MTEA with Progressive Auto-Encoding
- Multiobjective Symbiosis-Based Optimization
- Multitask EA/D with Transfer of Search Directions
- Multitask EA via Diversity- and Convergence-Oriented Knowledge Transfer
Expensive
- ParEGO with Knowledge Transfer
See Also
API Reference - Complete API documentation
Demos - Diverse demonstrations