Univariate Marginal Distribution Algorithm (UMDA)

class pypop7.optimizers.eda.umda.UMDA(problem, options)[source]

Univariate Marginal Distribution Algorithm (UMDA) for the normal (Gaussian) model.

Note

UMDA learns only the diagonal elements of the covariance matrix of the Gaussian sampling distribution, which makes the time complexity of each sampling step linear in the problem dimension. Hence, it can be treated as a baseline for large-scale black-box optimization (LBO). To obtain satisfactory performance for LBO, the number of offspring may need to be carefully tuned in practice.
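Concretely, the diagonal-only model means that each iteration fits one mean and one standard deviation per dimension from the selected parents and then resamples every dimension independently. The following is a minimal sketch of one such iteration, assuming only standard NumPy; the function and variable names are hypothetical and do not refer to PyPop7 internals:

import numpy as np

def umda_iteration(fitness_function, x, y, n_parents, rng):
    # select the best (lowest-fitness) n_parents individuals as parents
    parents = x[np.argsort(y)[:n_parents]]
    # learn only the univariate marginals: one mean and one standard
    # deviation per dimension (i.e., a diagonal covariance matrix)
    mean, std = np.mean(parents, axis=0), np.std(parents, axis=0)
    # resample each dimension independently, so each new individual
    # costs time linear in the number of dimensions
    x = rng.standard_normal(x.shape)*std + mean
    y = np.array([fitness_function(xi) for xi in x])
    return x, y

Here x is an (n_individuals, ndim_problem) array of individuals and y their fitness values; for example, rng = np.random.default_rng(2022) and x = rng.uniform(-5.0, 5.0, (200, 2)) would initialize a population of 200 two-dimensional individuals.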

Parameters:
  • problem (dict) –

    problem arguments with the following common settings (keys):
    • 'fitness_function' - objective function to be minimized (func),

    • 'ndim_problem' - number of dimensions (int),

    • 'upper_boundary' - upper boundary of the search range (array_like),

    • 'lower_boundary' - lower boundary of the search range (array_like).

  • options (dict) –

    optimizer options with the following common settings (keys):
    • 'max_function_evaluations' - maximum number of function evaluations (int, default: np.inf),

    • 'max_runtime' - maximum runtime allowed (float, default: np.inf),

    • 'seed_rng' - seed for the random number generator, which needs to be explicitly set (int);

    and with the following particular settings (keys), illustrated in the snippet after this list:
    • 'n_individuals' - number of offspring, aka offspring population size (int, default: 200),

    • 'n_parents' - number of parents, aka parental population size (int, default: int(options['n_individuals']/2)).
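Both particular settings can be overridden together with the common ones in a single options dict. A minimal sketch, with values chosen purely for illustration:

options = {'max_function_evaluations': 5000,  # common settings
           'seed_rng': 2022,
           'n_individuals': 300,  # offspring population size
           'n_parents': 100}  # parental population size (overrides the default int(300/2) = 150)

A larger offspring population yields more reliable estimates of the univariate marginals at the cost of more function evaluations per generation, which is why the note above suggests tuning 'n_individuals' for large-scale problems.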

Examples

Use the black-box optimizer UMDA from the EDA family to minimize the well-known test function Rosenbrock:

>>> import numpy  # engine for numerical computing
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.eda.umda import UMDA
>>> problem = {'fitness_function': rosenbrock,  # define problem arguments
...            'ndim_problem': 2,
...            'lower_boundary': -5*numpy.ones((2,)),
...            'upper_boundary': 5*numpy.ones((2,))}
>>> options = {'max_function_evaluations': 5000,  # set optimizer options
...            'seed_rng': 2022}
>>> umda = UMDA(problem, options)  # initialize the optimizer class
>>> results = umda.optimize()  # run the optimization process
>>> # return the number of function evaluations and best-so-far fitness
>>> print(f"UMDA: {results['n_function_evaluations']}, {results['best_so_far_y']}")
UMDA: 5000, 0.029323401402499186

For its correctness checking, please refer to this code-based repeatability report for more details.

n_individuals

Number of offspring, aka offspring population size.

Type:

int

n_parents

Number of parents, aka parental population size.

Type:

int

References

Mühlenbein, H. and Mahnig, T., 2002. Evolutionary computation and Wright’s equation. Theoretical Computer Science, 287(1), pp.145-165. https://www.sciencedirect.com/science/article/pii/S0304397502000981

Larrañaga, P. and Lozano, J.A. eds., 2001. Estimation of distribution algorithms: A new tool for evolutionary computation. Springer Science & Business Media. https://link.springer.com/book/10.1007/978-1-4615-1539-5

Mühlenbein, H. and Mahnig, T., 2001. Evolutionary algorithms: From recombination to search distributions. In Theoretical Aspects of Evolutionary Computing (pp. 135-173). Springer, Berlin, Heidelberg. https://link.springer.com/chapter/10.1007/978-3-662-04448-3_7

Larranaga, P., Etxeberria, R., Lozano, J.A. and Pena, J.M., 2000. Optimization in continuous domains by learning and simulation of Gaussian networks. Technical Report, Department of Computer Science and Artificial Intelligence, University of the Basque Country. https://tinyurl.com/3bw6n3x4

Larranaga, P., Etxeberria, R., Lozano, J.A. and Pena, J.M., 1999. Optimization by learning and simulation of Bayesian and Gaussian networks. Technical Report, Department of Computer Science and Artificial Intelligence, University of the Basque Country. https://tinyurl.com/5dktrdwc

Mühlenbein, H., 1997. The equation for response to selection and its use for prediction. Evolutionary Computation, 5(3), pp.303-346. https://tinyurl.com/yt78c786
