guofei9987 / scikit-opt

Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, Ant Colony Optimization Algorithm, Immune Algorithm, Artificial Fish Swarm Algorithm, Differential Evolution and TSP (Travelling Salesman Problem)

Home Page: https://scikit-opt.github.io/scikit-opt/#/en/

License: MIT License

Python 99.48% Shell 0.52%
genetic-algorithm pso particle-swarm-optimization tsp travelling-salesman-problem simulated-annealing ant-colony-algorithm immune-algorithm heuristic-algorithms immune

scikit-opt's Introduction


Swarm Intelligence in Python
(Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, Ant Colony Algorithm, Immune Algorithm, Artificial Fish Swarm Algorithm in Python)

Install

pip install scikit-opt

For the current developer version:

git clone git@github.com:guofei9987/scikit-opt.git
cd scikit-opt
pip install .

Features

Feature 1: UDF

UDF (user-defined function) is now supported!

For example, suppose you have just worked out a new type of selection operator.
Your selection function looks like this:
-> Demo code: examples/demo_ga_udf.py#s1

# step1: define your own operator:
def selection_tournament(algorithm, tourn_size):
    FitV = algorithm.FitV
    sel_index = []
    for i in range(algorithm.size_pop):
        aspirants_index = np.random.choice(range(algorithm.size_pop), size=tourn_size)
        sel_index.append(max(aspirants_index, key=lambda i: FitV[i]))
    algorithm.Chrom = algorithm.Chrom[sel_index, :]  # next generation
    return algorithm.Chrom

Import and build the GA
-> Demo code: examples/demo_ga_udf.py#s2

import numpy as np
from sko.GA import GA, GA_TSP

demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + (x[2] - 0.5) ** 2
ga = GA(func=demo_func, n_dim=3, size_pop=100, max_iter=500, prob_mut=0.001,
        lb=[-1, -10, -5], ub=[2, 10, 2], precision=[1e-7, 1e-7, 1])

Register your UDF with the GA
-> Demo code: examples/demo_ga_udf.py#s3

ga.register(operator_name='selection', operator=selection_tournament, tourn_size=3)

scikit-opt also provides some built-in operators
-> Demo code: examples/demo_ga_udf.py#s4

from sko.operators import ranking, selection, crossover, mutation

ga.register(operator_name='ranking', operator=ranking.ranking). \
    register(operator_name='crossover', operator=crossover.crossover_2point). \
    register(operator_name='mutation', operator=mutation.mutation)

Now run the GA as usual
-> Demo code: examples/demo_ga_udf.py#s5

best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

For now, UDF supports the crossover, mutation, selection, and ranking operators of GA. scikit-opt provides a dozen built-in operators; see here.

For advanced users:

-> Demo code: examples/demo_ga_udf.py#s6

class MyGA(GA):
    def selection(self, tourn_size=3):
        FitV = self.FitV
        sel_index = []
        for i in range(self.size_pop):
            aspirants_index = np.random.choice(range(self.size_pop), size=tourn_size)
            sel_index.append(max(aspirants_index, key=lambda i: FitV[i]))
        self.Chrom = self.Chrom[sel_index, :]  # next generation
        return self.Chrom

    ranking = ranking.ranking


demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + (x[2] - 0.5) ** 2
my_ga = MyGA(func=demo_func, n_dim=3, size_pop=100, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2],
             precision=[1e-7, 1e-7, 1])
best_x, best_y = my_ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

Feature 2: continue to run

(New in version 0.3.6)
Run an algorithm for 10 iterations, and then run another 20 iterations based on the previous 10:

from sko.GA import GA

func = lambda x: x[0] ** 2
ga = GA(func=func, n_dim=1)
ga.run(10)
ga.run(20)

Feature 3: 4 ways to accelerate

  • vectorization
  • multithreading
  • multiprocessing
  • cached

See https://github.com/guofei9987/scikit-opt/blob/master/examples/example_function_modes.py and the sketch below.
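A minimal sketch of switching modes, assuming the set_run_mode helper in sko.tools (as used in the linked example; treat the helper name and mode strings as assumptions if your version differs):

from sko.GA import GA
from sko.tools import set_run_mode


def obj_func(x):
    return x[0] ** 2 + x[1] ** 2


# assumption: set_run_mode tags the objective so the optimizer evaluates it in the chosen mode;
# 'vectorization' would additionally require obj_func to accept the whole population at once
set_run_mode(obj_func, 'multithreading')

ga = GA(func=obj_func, n_dim=2, size_pop=50, max_iter=100, lb=[-1, -1], ub=[1, 1])
best_x, best_y = ga.run()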

Feature 4: GPU computation

GPU computation is under development and is expected to be stable in version 1.0.0.
An example is already available: https://github.com/guofei9987/scikit-opt/blob/master/examples/demo_ga_gpu.py

Quick start

1. Differential Evolution

Step 1: define your problem
-> Demo code: examples/demo_de.py#s1

'''
min f(x1, x2, x3) = x1^2 + x2^2 + x3^2
s.t.
    x1*x2 >= 1
    x1*x2 <= 5
    x2 + x3 = 1
    0 <= x1, x2, x3 <= 5
'''


def obj_func(p):
    x1, x2, x3 = p
    return x1 ** 2 + x2 ** 2 + x3 ** 2


constraint_eq = [
    lambda x: 1 - x[1] - x[2]
]

constraint_ueq = [
    lambda x: 1 - x[0] * x[1],
    lambda x: x[0] * x[1] - 5
]

Step 2: run Differential Evolution
-> Demo code: examples/demo_de.py#s2

from sko.DE import DE

de = DE(func=obj_func, n_dim=3, size_pop=50, max_iter=800, lb=[0, 0, 0], ub=[5, 5, 5],
        constraint_eq=constraint_eq, constraint_ueq=constraint_ueq)

best_x, best_y = de.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

2. Genetic Algorithm

Step 1: define your problem
-> Demo code: examples/demo_ga.py#s1

import numpy as np


def schaffer(p):
    '''
    This function has many local minima, with strong oscillations.
    The global minimum is at (0, 0) with value 0.
    https://en.wikipedia.org/wiki/Test_functions_for_optimization
    '''
    x1, x2 = p
    part1 = np.square(x1) - np.square(x2)
    part2 = np.square(x1) + np.square(x2)
    return 0.5 + (np.square(np.sin(part1)) - 0.5) / np.square(1 + 0.001 * part2)

Step 2: run the Genetic Algorithm
-> Demo code: examples/demo_ga.py#s2

from sko.GA import GA

ga = GA(func=schaffer, n_dim=2, size_pop=50, max_iter=800, prob_mut=0.001, lb=[-1, -1], ub=[1, 1], precision=1e-7)
best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

-> Demo code: examples/demo_ga.py#s3

import pandas as pd
import matplotlib.pyplot as plt

Y_history = pd.DataFrame(ga.all_history_Y)
fig, ax = plt.subplots(2, 1)
ax[0].plot(Y_history.index, Y_history.values, '.', color='red')
Y_history.min(axis=1).cummin().plot(kind='line')
plt.show()

(figure: GA optimization history)

2.2 Genetic Algorithm for TSP (Travelling Salesman Problem)

Just import GA_TSP; it overloads the crossover and mutation operators to solve the TSP.

Step 1: define your problem. Prepare your point coordinates and the distance matrix.
Here the data are generated randomly as a demo:
-> Demo code: examples/demo_ga_tsp.py#s1

import numpy as np
from scipy import spatial
import matplotlib.pyplot as plt

num_points = 50

points_coordinate = np.random.rand(num_points, 2)  # generate coordinate of points
distance_matrix = spatial.distance.cdist(points_coordinate, points_coordinate, metric='euclidean')


def cal_total_distance(routine):
    '''The objective function. input routine, return total distance.
    cal_total_distance(np.arange(num_points))
    '''
    num_points, = routine.shape
    return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])

Step 2: run the GA
-> Demo code: examples/demo_ga_tsp.py#s2

from sko.GA import GA_TSP

ga_tsp = GA_TSP(func=cal_total_distance, n_dim=num_points, size_pop=50, max_iter=500, prob_mut=1)
best_points, best_distance = ga_tsp.run()

Step 3: plot the result:
-> Demo code: examples/demo_ga_tsp.py#s3

fig, ax = plt.subplots(1, 2)
best_points_ = np.concatenate([best_points, [best_points[0]]])
best_points_coordinate = points_coordinate[best_points_, :]
ax[0].plot(best_points_coordinate[:, 0], best_points_coordinate[:, 1], 'o-r')
ax[1].plot(ga_tsp.generation_best_Y)
plt.show()

(figure: GA_TSP result)

3. PSO (Particle Swarm Optimization)

3.1 PSO

Step 1: define your problem:
-> Demo code: examples/demo_pso.py#s1

def demo_func(x):
    x1, x2, x3 = x
    return x1 ** 2 + (x2 - 0.05) ** 2 + x3 ** 2

Step 2: run PSO
-> Demo code: examples/demo_pso.py#s2

from sko.PSO import PSO

pso = PSO(func=demo_func, n_dim=3, pop=40, max_iter=150, lb=[0, -1, 0.5], ub=[1, 1, 1], w=0.8, c1=0.5, c2=0.5)
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)

Step 3: plot the result
-> Demo code: examples/demo_pso.py#s3

import matplotlib.pyplot as plt

plt.plot(pso.gbest_y_hist)
plt.show()

(figure: PSO convergence)

3.2 PSO with nonlinear constraint

If you need a nonlinear constraint such as (x[0] - 1) ** 2 + (x[1] - 0) ** 2 - 0.5 ** 2 <= 0,
the code looks like this:

constraint_ueq = (
    lambda x: (x[0] - 1) ** 2 + (x[1] - 0) ** 2 - 0.5 ** 2,
)

max_iter = 50  # any iteration budget; note that demo_func must take a 2-D input here
pso = PSO(func=demo_func, n_dim=2, pop=40, max_iter=max_iter, lb=[-2, -2], ub=[2, 2],
          constraint_ueq=constraint_ueq)

Note that you can add more than one nonlinear constraint; just append them to constraint_ueq.

Moreover, there is an animation of the PSO search; see examples/demo_pso_ani.py

4. SA (Simulated Annealing)

4.1 SA for a multivariate function

Step 1: define your problem
-> Demo code: examples/demo_sa.py#s1

demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + x[2] ** 2

Step 2: run SA
-> Demo code: examples/demo_sa.py#s2

from sko.SA import SA

sa = SA(func=demo_func, x0=[1, 1, 1], T_max=1, T_min=1e-9, L=300, max_stay_counter=150)
best_x, best_y = sa.run()
print('best_x:', best_x, 'best_y', best_y)

Step 3: plot the result
-> Demo code: examples/demo_sa.py#s3

import matplotlib.pyplot as plt
import pandas as pd

plt.plot(pd.DataFrame(sa.best_y_history).cummin(axis=0))
plt.show()

(figure: SA convergence)

Moreover, scikit-opt provides three types of Simulated Annealing: Fast, Boltzmann, and Cauchy. See the SA documentation for more.
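For example, a minimal sketch of the Fast variant, reusing demo_func from Step 1 and assuming SAFast accepts the same core arguments as SA above:

from sko.SA import SAFast

sa_fast = SAFast(func=demo_func, x0=[1, 1, 1], T_max=1, T_min=1e-9, L=300, max_stay_counter=150)
best_x, best_y = sa_fast.run()
print('best_x:', best_x, 'best_y:', best_y)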

4.2 SA for TSP

Step 1: oh yes, define your problem (the same cal_total_distance and distance_matrix as in the GA_TSP example above; too boring to copy here).

Step 2: run SA for TSP
-> Demo code: examples/demo_sa_tsp.py#s2

from sko.SA import SA_TSP

sa_tsp = SA_TSP(func=cal_total_distance, x0=range(num_points), T_max=100, T_min=1, L=10 * num_points)

best_points, best_distance = sa_tsp.run()
print(best_points, best_distance, cal_total_distance(best_points))

Step 3: plot the result
-> Demo code: examples/demo_sa_tsp.py#s3

from matplotlib.ticker import FormatStrFormatter

fig, ax = plt.subplots(1, 2)

best_points_ = np.concatenate([best_points, [best_points[0]]])
best_points_coordinate = points_coordinate[best_points_, :]
ax[0].plot(sa_tsp.best_y_history)
ax[0].set_xlabel("Iteration")
ax[0].set_ylabel("Distance")
ax[1].plot(best_points_coordinate[:, 0], best_points_coordinate[:, 1],
           marker='o', markerfacecolor='b', color='c', linestyle='-')
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%.3f'))
ax[1].yaxis.set_major_formatter(FormatStrFormatter('%.3f'))
ax[1].set_xlabel("Longitude")
ax[1].set_ylabel("Latitude")
plt.show()

(figure: SA_TSP result)

More: plot the animation; see examples/demo_sa_tsp.py

5. ACA (Ant Colony Algorithm) for TSP

-> Demo code: examples/demo_aca_tsp.py#s2

from sko.ACA import ACA_TSP

aca = ACA_TSP(func=cal_total_distance, n_dim=num_points,
              size_pop=50, max_iter=200,
              distance_matrix=distance_matrix)

best_x, best_y = aca.run()

(figure: ACA_TSP result)

6. Immune Algorithm (IA)

-> Demo code: examples/demo_ia.py#s2

from sko.IA import IA_TSP

ia_tsp = IA_TSP(func=cal_total_distance, n_dim=num_points, size_pop=500, max_iter=800, prob_mut=0.2,
                T=0.7, alpha=0.95)
best_points, best_distance = ia_tsp.run()
print('best routine:', best_points, 'best_distance:', best_distance)

(figure: IA_TSP result)

7. Artificial Fish Swarm Algorithm (AFSA)

-> Demo code: examples/demo_afsa.py#s1

def func(x):
    x1, x2 = x
    return 1 / x1 ** 2 + x1 ** 2 + 1 / x2 ** 2 + x2 ** 2


from sko.AFSA import AFSA

afsa = AFSA(func, n_dim=2, size_pop=50, max_iter=300,
            max_try_num=100, step=0.5, visual=0.3,
            q=0.98, delta=0.5)
best_x, best_y = afsa.run()
print(best_x, best_y)

Projects using scikit-opt

scikit-opt's People

Contributors

agrover112, duguanyue, firefly-cpp, gitter-badger, guofei9987, hitliaomq, huanity, ilikega, koralturkk, lbiscuola, lisheng2001, samueljsluo, thatch, waterfall-xi, xu-kq, yingliangzhe, zhangxiao123qqq, zidong6344, zxf1001


scikit-opt's Issues

Reference bug in the ACA iteration loop: values in x_best_history from earlier generations get modified

The 4th line below (line 36 in the source) modifies Table in place, while the appended x_best (line 50 in the source) is a reference into Table, so when row j is modified in later iterations, the x_best entries stored for earlier generations change as well.

        for i in range(self.max_iter):  # for each iteration
            prob_matrix = (self.Tau ** self.alpha) * (self.prob_matrix_distance) ** self.beta
            for j in range(self.size_pop):  # for each ant
                self.Table[j, 0] = 0  # start point; could be random, but it makes little difference
                for k in range(self.n_dim - 1):  # each node the ant visits
                    taboo_set = set(self.Table[j, :k + 1])  # points already visited (and the current one) cannot be visited again
                    allow_list = list(set(range(self.n_dim)) - taboo_set)  # choose among these points
                    prob = prob_matrix[self.Table[j, k], allow_list]
                    prob = prob / prob.sum()  # normalize the probabilities
                    next_point = np.random.choice(allow_list, size=1, p=prob)[0]
                    self.Table[j, k + 1] = next_point

            # compute the distances
            y = np.array([self.func(i) for i in self.Table])

            # record the best-so-far along the way
            index_best = y.argmin()
            x_best, y_best = self.Table[index_best, :], y[index_best]
            self.x_best_history.append(x_best)
            self.y_best_history.append(y_best)

            # compute the pheromone to be newly deposited
            delta_tau = np.zeros((self.n_dim, self.n_dim))
            for j in range(self.size_pop):  # each ant
                for k in range(self.n_dim - 1):  # each node
                    n1, n2 = self.Table[j, k], self.Table[j, k + 1]  # the ant moves from node n1 to node n2
                    delta_tau[n1, n2] += 1 / y[j]  # deposited pheromone
                n1, n2 = self.Table[j, self.n_dim - 1], self.Table[j, 0]  # the ant moves from the last node back to the first
                delta_tau[n1, n2] += 1 / y[j]  # deposited pheromone

            # pheromone evaporation + pheromone deposition
            self.Tau = (1 - self.rho) * self.Tau + delta_tau
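A possible fix, sketched from the report above (hypothetical, not the maintainer's patch): store a snapshot of the best route rather than a view into Table.

            # append a copy so that later in-place edits of self.Table do not alter the stored history
            self.x_best_history.append(x_best.copy())
            self.y_best_history.append(y_best)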

GA automatically carries values over, causing wrong results

Hi, when I use GA, a matrix inside my objective function automatically becomes the result of the previous iteration, but I do not want the matrix I defined to change. I pass it via self, which is probably the main cause, but without self I cannot pass it in as a constant.

GA integer programming: the upper/lower bounds are violated

In my program there are 15 variables, integer-valued in the range 0 to 1363, so I set lb and ub to 0 and 1363. However, the program generates numbers greater than 1363, for example:
IndexError: index 1901 is out of bounds for axis 0 with size 1363
But as long as I set ub below 1000, there is no problem.

from sko.GA import GA
K=15
ga = GA(func=cost_func, n_dim=K, max_iter=500, lb=[0 for i in range(K)], ub=[1363 for i in range(K)], precision=[1 for i in range(K)])
rs = ga.run()
print(rs)

How to pass extra parameters to the objective function

The official example looks like this:

def demo_func(x):
    x1, x2, x3 = x
    return x1**2 + x2*1 + x3

But my objective function looks more like this:

def demo_func(x, model, y):
    x1, x2, x3 = x
    ...

where model and y are extra parameters I need to pass in. How should I write the demo_func used in
PSO(func=demo_func, dim=3, pop=40, max_iter=150, lb=[0, -1, 0.5], ub=[1, 1, 1], w=0.8, c1=0.5, c2=0.5)?
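One common way to handle this is to bind the extra arguments before handing the function to the optimizer, for example with functools.partial or a closure (a sketch; my_model and my_y stand in for the asker's actual objects):

from functools import partial
from sko.PSO import PSO


def demo_func(x, model, y):
    x1, x2, x3 = x
    # ... use model and y here ...
    return x1 ** 2 + (x2 - 0.05) ** 2 + x3 ** 2


my_model, my_y = object(), 0  # placeholders for the real model and data

# bind the extra parameters so the optimizer only sees a function of x
bound_func = partial(demo_func, model=my_model, y=my_y)

pso = PSO(func=bound_func, n_dim=3, pop=40, max_iter=150, lb=[0, -1, 0.5], ub=[1, 1, 1], w=0.8, c1=0.5, c2=0.5)
pso.run()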

Optimization with multiple parameters

Hello, I have a function model whose input consists of multiple parameters, each of which can be given a range (minimum and maximum), and I want to find the parameter values corresponding to the optimum. I looked at scikit-opt's simulated annealing and particle swarm algorithms; can they handle this? Some clarification would be appreciated, thanks!

Abbreviation of the Artificial Fish Swarm Algorithm

I noticed that the artificial fish swarm algorithm is abbreviated in several ways: ASFS, ASFA, AFSA. Could you unify them? (Not a big deal, though.)

distance_matrix used inside the TSP objective func

func's parameters do not include distance_matrix, so a global variable distance_matrix has to be set before every run. Apart from overriding, could you provide a self attribute for passing auxiliary parameters?

Small bug in crossover.crossover_1point (single-point crossover)

Line 9 of crossover.py:
n = np.random.randint(0, self.len_chrom, 1)
While debugging I saw that n here is a 1x1 array [n] rather than a scalar. When execution reaches lines 11 and 12:
seg1, seg2 = self.Chrom[i, n:].copy(), self.Chrom[i + 1, n:].copy()
self.Chrom[i, n:], self.Chrom[i + 1, n:] = seg2, seg1
it raises "only integer scalar arrays can be converted to a scalar index", presumably because n cannot be used as an index here.

crossover_2point does not have this problem, because there
n1, n2 = np.random.randint(0, self.len_chrom, 2)
does produce two scalars assigned to n1 and n2.

ga.register raised this error when using single-point crossover, so I suspect something is off here.
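A sketch of the kind of fix the report suggests (hypothetical, not the maintainer's patch): draw the crossover point as a scalar.

n = np.random.randint(0, self.len_chrom)  # without a size argument this returns a scalar, which is a valid slice index
seg1, seg2 = self.Chrom[i, n:].copy(), self.Chrom[i + 1, n:].copy()
self.Chrom[i, n:], self.Chrom[i + 1, n:] = seg2, seg1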

The constraints do not seem to take effect

Thank you very much for your prompt replies.
Hello, I used the constraint list from the demo to impose constraints on the differential evolution algorithm, but judging from the computed results the constraints do not seem to take effect.

constraint_ueq = []
for i in range(16):
    for j in range(15):
        constraint_ueq.append(lambda x: x[0 + j + i * 16] - x[1 + j + i * 16])
for i in range(16):
    for j in range(15):
        constraint_ueq.append(lambda x: x[0 + j * 16 + i] - x[16 + j * 16 + i])
for i in range(6):
    constraint_ueq.append(lambda x: x[256 + i] - x[257 + i])
for i in range(11):
    constraint_ueq.append(lambda x: x[264 + i] - x[263 + i])

I am doing automatic calibration of an automotive system, which involves a 256-point surface, a 7-point curve and a 12-point curve.
The surface and the curves have a smoothness requirement (monotonically increasing along the x and y axes), so I imposed inequality constraints with the relations above, but the optimized result does not seem to reflect these constraints. I looked at the DE source code; it uses a penalty function to impose the constraints, like this:

if not self.has_constraint:
    self.Y = self.Y_raw
else:
    # constraint
    penalty_eq = np.array([np.sum(np.abs([c_i(x) for c_i in self.constraint_eq])) for x in self.X])
    penalty_ueq = np.array([np.sum(np.abs([max(0, c_i(x)) for c_i in self.constraint_ueq])) for x in self.X])
    self.Y = self.Y_raw + 1e5 * penalty_eq + 1e5 * penalty_ueq

How can I make my constraints take effect in the algorithm's population generation, mutation, crossover and selection?
Thank you very much for your prompt reply.

How can I add complex constraints?

For example, my optimization problem is as follows:

min mean(x)
s.t. 1< function(x) < 5

The "function()" is a function using "x" as input defined by myself.

Let's say DE algorithm.
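A minimal sketch of how such a two-sided constraint can be expressed with the constraint_ueq list that DE already accepts (my_constraint is a placeholder for the asker's function()):

import numpy as np
from sko.DE import DE


def my_constraint(x):  # placeholder for the asker's function(x)
    return x[0] + 2 * x[1]


def obj_func(x):
    return np.mean(x)


# 1 < my_constraint(x) < 5, rewritten as two "<= 0" inequalities
constraint_ueq = [
    lambda x: 1 - my_constraint(x),  # enforces my_constraint(x) >= 1
    lambda x: my_constraint(x) - 5,  # enforces my_constraint(x) <= 5
]

de = DE(func=obj_func, n_dim=2, size_pop=50, max_iter=300, lb=[0, 0], ub=[5, 5],
        constraint_ueq=constraint_ueq)
best_x, best_y = de.run()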

May Class "GeneticAlgorithmBase" have "precision" Attribute?

Firstly, I have to say It's an extremely Great package, and thanks for your work.

Well, let's go back to my case. I am using DifferentialEvolution algorithm to solve LingerPrograming problems. The algorithm is OK and powerful, without doubt.

However, when I wanna to solve Integer LingerPrograming, I figure out that "GeneticAlgorithmBase" class does not have "precision" attribute as GA has.

That is my problem , and now I am working on adding "precision" attribute to "GAbase".

Over all, here is just a suggestion to implement "precision" on "GAbase" Class.
Since I don't know what is your specific requirement or which detail you concern about. Maybe we could discuss about that, My feedback is just a suggestion anyway.

Missing files in sdist

It appears that the manifest is missing at least one file necessary to build
from the sdist for version 0.5.6. You're in good company, about 5% of other
projects updated in the last year are also missing files.

+ /tmp/venv/bin/pip3 wheel --no-binary scikit-opt -w /tmp/ext scikit-opt==0.5.6
Looking in indexes: http://10.10.0.139:9191/root/pypi/+simple/
Collecting scikit-opt==0.5.6
  Downloading http://10.10.0.139:9191/root/pypi/%2Bf/288/dd1c4e27be972/scikit-opt-0.5.6.tar.gz (22 kB)
    ERROR: Command errored out with exit status 1:
     command: /tmp/venv/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-wheel-eoj_scqa/scikit-opt/setup.py'"'"'; __file__='"'"'/tmp/pip-wheel-eoj_scqa/scikit-opt/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-wheel-eoj_scqa/scikit-opt/pip-egg-info
         cwd: /tmp/pip-wheel-eoj_scqa/scikit-opt/
    Complete output (7 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-wheel-eoj_scqa/scikit-opt/setup.py", line 25, in <module>
        long_description=read_file('docs/en/README.md'),
      File "/tmp/pip-wheel-eoj_scqa/scikit-opt/setup.py", line 10, in read_file
        with open(os_path.join(this_directory, filename), encoding='utf-8') as f:
    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-wheel-eoj_scqa/scikit-opt/docs/en/README.md'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

A problem with simulated annealing (SA_TSP)

import random
import numpy as np
from sko.SA import SA_TSP

def cal_total_distance(routine):
    num_points, = routine.shape
    return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])

num_points = 30
distance_matrix = [ [0] * num_points for _ in range(num_points) ]
for i in range(num_points):
    for j in range(i+1, num_points):
        distance_matrix[i][j] = distance_matrix[j][i] = random.randint(1, num_points)
distance_matrix = np.array(distance_matrix)

sa_tsp = SA_TSP(func=cal_total_distance, x0=range(num_points), T_max=3000, T_min=1, L=10 * num_points)
best_points, best_distance = sa_tsp.run()
print(best_points, best_distance)

The resulting best_distance is shown in the figure below (x-axis: number of iterations; y-axis: cost).
(figure: SA_TSP cost history)

Error when running the TSP example with PSO

Hi, I tried to use unconstrained PSO to solve a TSP, but the program raised an error. The code is as follows:

pso = PSO(func=cal_total_distance, dim=num_points)
fitness = pso.run()
return pso.gbest_x, pso.gbest_y
IndexError                                Traceback (most recent call last)
<ipython-input-8-2a9f4e09697c> in <module>
     78     PSO_Best_points = []
     79     for j in tqdm(range(TestCase)):
---> 80         PSO_best_points, PSO_best_distance, solution_answer = Particle_Swarm_Optimization(distance_matrix, num_points)
     81         PSO_Percase_best_points.append(PSO_best_points)
     82         PSO_Percase_best_distance.append(PSO_best_distance)

<ipython-input-6-36f764e8053d> in Particle_Swarm_Optimization(distance_matrix, num_points)
      4         num_points, = routine.shape
      5         return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])
----> 6     pso = PSO(func=cal_total_distance, dim=num_points)
      7     fitness = pso.run()
      8     return pso.gbest_x, pso.gbest_y, cal_total_distance(solution_point)

~/anaconda3/envs/py36/lib/python3.6/site-packages/sko/PSO.py in __init__(self, func, dim, pop, max_iter, lb, ub, w, c1, c2)
     93         v_high = self.ub - self.lb
     94         self.V = np.random.uniform(low=-v_high, high=v_high, size=(self.pop, self.dim))  # speed of particles
---> 95         self.Y = self.cal_y()  # y = f(x) for all particles
     96         self.pbest_x = self.X.copy()  # personal best location of every particle in history
     97         self.pbest_y = self.Y.copy()  # best image of every particle in history

~/anaconda3/envs/py36/lib/python3.6/site-packages/sko/PSO.py in cal_y(self)
    120     def cal_y(self):
    121         # calculate y for every x in X
--> 122         self.Y = np.array([self.func(x) for x in self.X]).reshape(-1, 1)
    123         return self.Y
    124 

~/anaconda3/envs/py36/lib/python3.6/site-packages/sko/PSO.py in <listcomp>(.0)
    120     def cal_y(self):
    121         # calculate y for every x in X
--> 122         self.Y = np.array([self.func(x) for x in self.X]).reshape(-1, 1)
    123         return self.Y
    124 

<ipython-input-6-36f764e8053d> in cal_total_distance(routine)
      3     def cal_total_distance(routine):
      4         num_points, = routine.shape
----> 5         return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])
      6     pso = PSO(func=cal_total_distance, dim=num_points)
      7     fitness = pso.run()

<ipython-input-6-36f764e8053d> in <listcomp>(.0)
      3     def cal_total_distance(routine):
      4         num_points, = routine.shape
----> 5         return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])
      6     pso = PSO(func=cal_total_distance, dim=num_points)
      7     fitness = pso.run()

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

PSO_TSP?

As the title says, is it possible to add TSP implementation with PSO?

Add references

Well, first, I must say that your work is the cleanest I could find so far! There are tons of variations and the sources out there are full of mess.

The question is, can you please add references of papers you used to base every algorithm on?

Like for example, I was testing with SA and I found a strange parameter L (Long of Chain). I looked through a few different papers but couldn't find it. So which implementation is this based on?

The same goes for all the other algorithms; having a reference is always good. Maybe just put links to the papers in the source, that would be great I guess.

Capacitated VRP with time windows

Hi, thanks for the package with multiple metaheuristics which can be implemented with ease.
I was wondering whether we can define our own vehicle routing problems with constraints like time windows and use this package, or whether we have to define our own custom functions.
Thanks!

Hybridizing algorithms and extensibility

I have been taking a soft computing course recently, which mentioned hybridizing algorithms. Does the author provide convenient functions for combining algorithms, for example a genetic algorithm with simulated annealing or with local gradient descent? Does the library offer such interfaces? Also, is there a quick way to customize the pheromone-related computations in the ant colony algorithm, so that strategies such as elitist ants or the max-min ant system can be implemented?

Want to set up starting point

Thanks for the great work of this repo!

It would be a nice enhancement to let us set a starting point for the parameters we want to optimize.

Complex constraints in PSO

I have tried running the PSO example with the constraints defined in the DE example:

constraint_eq = [lambda x: 1 - x[1] - x[2]]
pso = PSO(func=obj_func, dim=2, pop=40, max_iter=100, lb=new_low_bound, ub=new_up_bound, constraint_eq=constraint_eq, w=0.8, c1=0.5, c2=0.5)

However, there is an error: TypeError: __init__() got an unexpected keyword argument 'constraint_eq'.

Could you please tell me how to add constraints in PSO?

The lower/upper bounds of the perturbation in simulated annealing look wrong

In the SAFast class of the simulated annealing module:

class SAFast(SimulatedAnnealingBase):
    def __init__(self, func, x0, T_max=100, T_min=1e-7, L=300, max_stay_counter=150, **kwargs):
        super().__init__(func, x0, T_max, T_min, L, max_stay_counter, **kwargs)
        self.m, self.n, self.quench = kwargs.get('m', 1), kwargs.get('n', 1), kwargs.get('quench', 1)
        self.lower, self.upper = kwargs.get('m', -10), kwargs.get('m', 10)
        self.c = self.m * np.exp(-self.n * self.quench)
    def get_new_x(self, x):
        r = np.random.uniform(-1, 1, size=self.n_dims)
        xc = np.sign(r) * self.T * ((1 + 1.0 / self.T) ** np.abs(r) - 1.0)
        x_new = x + xc * (self.upper - self.lower)
        return x_new

    def cool_down(self):
        self.T = self.T_max * np.exp(-self.c * self.iter_cycle ** self.quench)

When the initializer determines the perturbation's lower and upper bounds from the keyword arguments, both use the key 'm'. Is this a mistake? If I pass a parameter m, wouldn't lower and upper end up with the same value? The same problem exists in the SABoltzmann class. The version I installed is 0.5.5.
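A sketch of what the reporter appears to expect (hypothetical, not the maintainer's patch): read each bound from its own keyword argument.

        # read the perturbation bounds from 'lower'/'upper' instead of reusing 'm' twice
        self.lower, self.upper = kwargs.get('lower', -10), kwargs.get('upper', 10)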

How should an integer program be written?

demo_func=lambda x: x[0]**2 + x[1]**2 + x[2]**2
ga = GA(func=demo_func,n_dim=3, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2])
best_x, best_y = ga.fit()

If x[0] can only take integer values, how should the code be written?
Thanks.
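Based on the precision argument shown in the GA examples above, a sketch of forcing x[0] to integers while keeping the other variables continuous:

from sko.GA import GA

demo_func = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2
# precision=1 on the first dimension restricts x[0] to integer steps
ga = GA(func=demo_func, n_dim=3, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2],
        precision=[1, 1e-7, 1e-7])
best_x, best_y = ga.run()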

Are value1 and value2 swapped in crossover_pmx?

In the crossover_pmx(self) function of ./sko/operators/crossover.py:

for j in range(cxpoint1, cxpoint2):
    value1, value2 = Chrom1[j], Chrom2[j]
    pos1, pos2 = pos1_recoder[value1], pos2_recoder[value2]

Are value1 and value2 on line 80 swapped? As a result, the value of pos1 is almost always equal to j, and the subsequent swap does not actually perform a crossover.

Can the decision variables be constrained to integers?

In PSO, my decision variables are integer indices, so I was wondering whether making X and V integers would let the algorithm run a bit faster.

A slightly more complex integer programming question

This was mentioned in #15, but when I actually use it, the computation does not seem to follow the precision exactly. Could there be a dedicated discrete mode?

A simple example:

def obj_func(p):
    x1, x2, x3 = p
    return -x1 - x2 -  x3


constraint_eq = [
    lambda x: 0 if x[0] == x[1] else 1
]

from sko.GA import GA

ga = GA(func=obj_func, n_dim=3, size_pop=50, max_iter=800, lb=[0, 0, 0], ub=[5, 5, 5],
        constraint_eq=constraint_eq, precision=[1] * 3)

best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

Output:

best_x: [4.66666667 4.66666667 4.33333333] 
 best_y: -13.666666666666668

GA produces values exceeding ub

Hello,

GA(func=self.fitness,
   n_dim=16,
   size_pop=100,
   max_iter=800,
   lb=[1] * 16,
   ub=[90] * 16,
   precision=1)

With this ub the maximum should be 90, but values like 123 and 127 appear.
