# Category Archives: Software

## (Constrained) Numerical Optimization Problem (NOP)

Category : Code

A general constrained numerical optimization problem (NOP) may be written as follows:

$\begin{array}{rcll} \min &~& f(\mathbf{x}) & \\ \mathrm{subject~to} &~& g_i(\mathbf{x}) \in [c_i, d_i] &\text{for } i=1,\ldots,n \end{array}$

where $f(\mathbf{x})$ is the objective function, each $g_i(\mathbf{x})$ is a constraint function to be satisfied, and $c_i$ and $d_i$ are constants. Each function can be nonlinear and non-smooth. If there are no constraints (i.e., $n=0$), the problem becomes an unconstrained optimization problem.
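As a concrete illustration of the form above, a tiny NOP can be evaluated in a few lines of JAVA. The problem below (minimize $x_1^2+x_2^2$ subject to $x_1+x_2 \in [1,2]$) and all names are made up for illustration only:

```java
// Illustrative toy NOP: minimize f(x) = x1^2 + x2^2
// subject to g1(x) = x1 + x2 lying in the interval [c1, d1] = [1, 2].
public class NopExample {
    static double objective(double[] x) {
        return x[0] * x[0] + x[1] * x[1];
    }

    // Constraint function g1(x); feasible iff its value lies in [1, 2].
    static double g1(double[] x) {
        return x[0] + x[1];
    }

    // Violation of an interval constraint: 0 when value is inside [c, d],
    // otherwise the distance to the nearest interval bound.
    static double violation(double value, double c, double d) {
        if (value < c) return c - value;
        if (value > d) return value - d;
        return 0.0;
    }

    public static void main(String[] args) {
        double[] x = {0.5, 0.5};                         // g1 = 1.0 -> feasible
        System.out.println(objective(x));                // 0.5
        System.out.println(violation(g1(x), 1.0, 2.0));  // 0.0
    }
}
```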

NOP Instance Implementation [in DEPSO or SCO]

To implement your own problem instance, you need to create a JAVA source file, normally placed in the directory problem/unconstrained (if the problem has no constraints) or problem/constrained (if it has constraints).

Implementation Tips: 1) all the variable bounds must be specified; 2) any equality constraint should be relaxed by a small tolerance value (e.g., ε=1E-4 for problem.constrained.Michalewicz_G3); and 3) problem.ProblemEncoder and problem.UnconstrainedProblemEncoder are the parent classes of all constrained (e.g., problem.constrained.Michalewicz_G1) and unconstrained (e.g., problem.unconstrained.GoldsteinPrice) problems, respectively.
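The tips above can be sketched as a self-contained toy instance. Note that this only mimics the described layout (bounds, objective, relaxed equality constraint); the actual API of problem.ProblemEncoder is not reproduced here, and all names below are illustrative:

```java
// Self-contained sketch of a problem instance, mirroring the tips above:
// explicit variable bounds and an equality constraint relaxed by ε.
public class InstanceSketch {
    static final double EPS = 1E-4; // tolerance for relaxing h(x) = 0

    // Tip 1: every variable bound must be specified.
    static final double[] LOWER = {-1.0, -1.0};
    static final double[] UPPER = { 1.0,  1.0};

    static double objective(double[] x) {
        return x[0] + x[1];
    }

    // Tip 2: the equality constraint h(x) = x1^2 + x2^2 - 1 = 0 is relaxed
    // to |h(x)| <= EPS, i.e., h(x) must lie in the interval [-EPS, +EPS].
    static boolean equalitySatisfied(double[] x) {
        double h = x[0] * x[0] + x[1] * x[1] - 1.0;
        return Math.abs(h) <= EPS;
    }

    public static void main(String[] args) {
        double[] x = {0.6, 0.8};  // exactly on the unit circle
        System.out.println(equalitySatisfied(x)); // true
    }
}
```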

We use Michalewicz’s G1 problem as an example to explain the steps of implementing an instance in JAVA. Its mathematical form, together with detailed comments, can be found in src/problem/constrained/Michalewicz_G1.java; from these it should be easy to figure out the implementation details.

Compiling Tips: Type “ant” to build; the output file will be dist/depso.jar. You can also import the project into the Eclipse IDE.

Testing Tips: There is a tool to calculate the output responses of a problem from given input values. The tool’s main class is problem.Simulator. It takes two parameters: the problem (function) name and the input values (separated by “,”). A hypothetical invocation (the exact arguments may differ) is: $ java -cp dist/depso.jar problem.Simulator problem.unconstrained.GoldsteinPrice 0,-1

## MAOS-QAP: Project Portal

Command line (example): $ java -cp ../binary/MAOS_BIN.jar maosKernel.MAOSExecuter QAP:Problem=jeu_100_025_01 N=500 T=500 DUP_TIMES=10 solver=STD_NLDUX
//java: the JAVA executor; JAVA Runtime Environment version 1.5 or above is preferred
//maosKernel.MAOSExecuter: the main execution entrance of the MAOS program

JAVA Options (see “java -help” for more information): there is one recommended optional parameter for java: -server selects the "server" VM, which is normally faster than the "client" VM. The "-cp" option is used for loading the binary file "../binary/MAOS_BIN.jar".

General Parameters:
NAME         VALUE_type   Range   Default_Value    Description
QAP:Problem  String       *       <Problem_Name>   The problem instance to be solved
--------------------------------------------------------------------------------------
N            integer      >1      100              The number of agents
T            integer      >0      500              Termination condition: the maximum learning cycles
Tcon         integer      >0      -1               Termination condition: the number of cycles during which the best state is unvaried
//If Tcon==-1, then Tcon=T
DUP_TIMES    integer      >0      2                Number of trials
--------------------------------------------------------------------------------------
Solver       String       *       <Solver_Name>    The name of the script of the actual solver
//The explanation of Problem_Name and Solver_Name can be found in Directories & Files.

Output Information

The program provides screen output, writes the running result to a file, and stores the best solution ever found.

[Initialization information]: provides the parsing information during the initialization.

[Runtime information]: The program outputs runtime information, i.e., the current best evaluation values and execution time, at every "Tout" cycles.

[Summary information]: At the end, it outputs the input variables, response values, and evaluation values <Vcon, Vopt> of the best solution.

Result File: The result file will be stored in the directory "myprojects/results".
Solution File: The best solution (better than any previous solutions) will be stored in the directory "myprojects/tasks/QAP/solution" (in the QAPLIB format).

References
[1] Xiao-Feng Xie. Round-table group optimization for sequencing problems. International Journal of Applied Metaheuristic Computing, 3(4): 1-24, 2012.
@Article{Xie2012a,
Title = {Round-table group optimization for sequencing problems},
Author = {Xiao-Feng Xie},
Journal = {International Journal of Applied Metaheuristic Computing},
Year = {2012},
Number = {4},
Pages = {1--24},
Volume = {3},
PDF={http://www.wiomax.com/team/xie/paper/IJAMC12.pdf},
DOI={10.4018/jamc.2012100101},
Code={http://www.wiomax.com/maos-qap}
}

## Ant Colony Optimization (ACO) Algorithms

Ant colony optimization (ACO), or ant system (AS), is a class of metaheuristic optimization algorithms inspired by the emergent search behavior of natural ants using pheromone trails. We present CGO-AS, a generalized ant system (AS) implemented in the framework of cooperative group optimization (CGO), to show the leverage in optimization gained by mixing individual and social learning. The ant colony is a simple yet efficient natural system for understanding the effects of primary intelligence on optimization. However, existing AS algorithms mostly focus on the capability of using social heuristic cues while ignoring individual learning. CGO can integrate the advantages of a cooperative group and a low-level algorithm portfolio design, and the agents of CGO can perform both individual and social search. In CGO-AS, each ant (agent) is extended with an individual memory, and is implemented with a novel search strategy that uses individual and social cues in a controlled proportion. The presented CGO-AS is therefore especially useful for exposing the power of mixed individual and social learning in improving optimization. The optimization performance is tested with instances of the traveling salesman problem (TSP).
The results prove that a cooperative ant group using both individual and social learning obtains better performance than systems using either individual or social learning alone. The best performance is achieved when agents use individual memory as their primary information source and simultaneously use social memory as their search guidance. In comparison with existing AS systems, CGO-AS retains a faster learning speed toward higher-quality solutions, especially in the later learning cycles. The leverage in optimization achieved by CGO-AS is most likely due to its inherent feature of adaptively maintaining the population diversity in the individual memory of agents, and of accelerating the learning process with accumulated knowledge in the social memory.

• Xiao-Feng Xie and Zun-Jing Wang. Cooperative group optimization with ants (CGO-AS): Leverage optimization with mixed individual and social learning. Applied Soft Computing, 50: 223-234, 2017.
@Article{Xie2017Ants,
Title = {Cooperative group optimization with ants (CGO-AS): Leverage optimization with mixed individual and social learning},
Author = {Xiao-Feng Xie and Zun-Jing Wang},
Journal = {Applied Soft Computing},
Year = {2017},
Volume = {50},
Pages = {223--234},
PDF={http://www.wiomax.com/team/xie/paper/ASOC17.pdf},
DOI={10.1016/j.asoc.2016.11.018}
}

## Social Cognitive Optimization (SCO): Project Portal

Social Cognitive Optimization (SCO) [1, 2] is an optimization algorithm for solving the (constrained) numerical optimization problem. SCO is an agent-based model based on the observational learning mechanism in human social cognition. In CGOS [3], SCO was hybridized with differential evolution (DE) to obtain better results than the individual algorithms on a common set of benchmark problems.

Related Information: Please find other related code and software in our Source Code Library.
Basic Description What’s New Problem to be solved Setting Parameters Output Information References Contact

License information: SCO is free software; you can redistribute it and/or modify it under the terms of the Creative Commons Non-Commercial License 3.0.

Problem to be solved: the (constrained) numerical optimization problem (NOP), also called the nonlinear programming problem.

System Requirements: SCO is a platform-independent software developed in JAVA (version 1.4 or above).

Command line (examples): $ java SCO Problem=<Problem_Name> [NAME=VALUE] …

What’s New

• It implements the original SCO algorithm [1, 2].

• Setting parameters: Problem, N, T, NL.

Problem to be solved

The problem to be solved is the (constrained) numerical optimization problem (NOP), including nonlinear programming problems.

To implement your own problem instance, you need to create a JAVA source file, normally placed in the directory problem/unconstrained (if the problem has no constraints) or problem/constrained (if it has constraints).

Implementation Tips: 1) all the variable bounds must be specified; 2) any equality constraint should be relaxed by a small tolerance value (e.g., ε=1E-4 for problem.constrained.Michalewicz_G3); and 3) problem.ProblemEncoder and problem.UnconstrainedProblemEncoder are the parent classes of all constrained (e.g., problem.constrained.Michalewicz_G1) and unconstrained (e.g., problem.unconstrained.GoldsteinPrice) problems, respectively.

A more detailed description of the problem and its implementation can be found here.

Setting parameters [NAME=VALUE]
NAME         VALUE_type   Range   Default_Value    Description
Problem      String       *       <Problem_Name>   The problem to be solved
//For example: problem.constrained.Michalewicz_G2 is the default value
------------------------------------------------------------------------------------------------------

N            integer      >5      70               General: The number of agents
T            integer      >1      2000             General: The maximum learning cycles
NL           integer      >1      3*N              For the library: The number of points

//The total number of evaluation times is N*T+NL
//The program outputs runtime information of the best solution every "Tout" cycles.


Output Information

[Parsing information]: provide the parsing information for all input parameters.

[Setting information]: show the information of all setting parameters for the algorithm.

[Runtime information]: The program outputs runtime information, i.e., the evaluation values <Vcon, Vopt> of the best solution, at every “Tout” cycles.
//Vopt: the value of the objective function; Vcon: the weighted constraint violation value (≥0); it is not output if Vcon≡0, since there is no violation

[Summary information]: At the end, it outputs the input variables, response values, and evaluation values <Vcon, Vopt> of the best solution.

References
[1] Xiao-Feng Xie, Wen-Jun Zhang, and Zhi-Lian Yang. Social cognitive optimization for nonlinear programming problems. In International Conference on Machine Learning and Cybernetics (ICMLC), pages 779-783, Beijing, China, 2002.
@InProceedings{Xie:2002p779,
Title = {Social cognitive optimization for nonlinear programming problems},
Author = {Xiao-Feng Xie and Wen-Jun Zhang and Zhi-Lian Yang},
Booktitle = {International Conference on Machine Learning and Cybernetics (ICMLC)},
Year = {2002},
PDF={http://www.wiomax.com/team/xie/paper/ICMLC02A.pdf},
DOI={10.1109/ICMLC.2002.1174487},
Code={http://www.wiomax.com/sco},
Pages = {779--783}
}
[2] Xiao-Feng Xie and Wen-Jun Zhang. Solving engineering design problems by social cognitive optimization. In Genetic and Evolutionary Computation Conference (GECCO), pages 261-262, Seattle, WA, USA, 2004.
@InProceedings{Xie:2004p261,
Title = {Solving engineering design problems by social cognitive optimization},
Author = {Xiao-Feng Xie and Wen-Jun Zhang},
Booktitle = {Genetic and Evolutionary Computation Conference (GECCO)},
Year = {2004},
Pages = {261--262},
PDF={http://www.wiomax.com/team/xie/paper/GECCO04_SCO.pdf},
DOI={10.1007/978-3-540-24854-5_27},
Code={http://www.wiomax.com/sco},
}
[3] Xiao-Feng Xie, Jiming Liu, and Zun-Jing Wang. A cooperative group optimization system. Soft Computing, 18(3): 469-495, 2014.
@Article{xie2014cooperative,
Title = {A cooperative group optimization system},
Author = {Xie, Xiao-Feng and Liu, Jiming and Wang, Zun-Jing},
Journal = {Soft Computing},
Year = {2014},
Number = {3},
Pages = {469--495},
Volume = {18},
PDF={http://www.wiomax.com/team/xie/paper/SOCO14.pdf},
DOI={10.1007/s00500-013-1069-8},
Publisher = {Springer}
}

## Dissipative Particle Swarm Optimization (DPSO)

Category : Open Source

Dissipative particle swarm optimization (DPSO) is developed according to the self-organization of dissipative structures. Negative entropy is introduced into the particle swarm to construct an open dissipative system that is far from equilibrium, so as to drive the evolutionary process towards better fitness.
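The dissipative idea can be illustrated with a minimal sketch: with small probabilities, a particle's velocity and location are re-randomized, injecting negative entropy so the swarm stays far from equilibrium. The parameter names cv and cl and the exact update below are assumptions for illustration, not the paper's verbatim code:

```java
import java.util.Random;

// Illustrative dissipative step: occasionally re-randomize velocity
// (probability cv) and location (probability cl) within [lo, hi].
public class DissipativeStep {
    static void apply(double[] v, double[] x, double lo, double hi,
                      double cv, double cl, Random rnd) {
        for (int i = 0; i < x.length; i++) {
            if (rnd.nextDouble() < cv) {          // chaos for the velocity
                v[i] = (rnd.nextDouble() * 2 - 1) * (hi - lo);
            }
            if (rnd.nextDouble() < cl) {          // chaos for the location
                x[i] = lo + rnd.nextDouble() * (hi - lo);
            }
        }
    }

    public static void main(String[] args) {
        double[] v = {0.1, 0.2};
        double[] x = {0.5, -0.5};
        apply(v, x, -1.0, 1.0, 0.1, 0.001, new Random(42));
        // All coordinates remain within [-1, 1] after the step.
        for (double xi : x) System.out.println(xi >= -1.0 && xi <= 1.0);
    }
}
```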

• Xiao-Feng Xie, Wen-Jun Zhang, and Zhi-Lian Yang. A dissipative particle swarm optimization. In Congress on Evolutionary Computation (CEC), pages 1456-1461, Honolulu, HI, USA, 2002.
@InProceedings{Xie:2002p1456,
Title = {A dissipative particle swarm optimization},
Author = {Xiao-Feng Xie and Wen-Jun Zhang and Zhi-Lian Yang},
Booktitle = {Congress on Evolutionary Computation (CEC)},
Year = {2002},
PDF={http://www.wiomax.com/team/xie/paper/CEC02.pdf},
DOI={10.1109/CEC.2002.1004457},
Code={http://www.wiomax.com/dpso},
Pages = {1456--1461}
}

## DEPSO Algorithm: Project Portal

DEPSO [1], also called DEPS, is an algorithm for the (constrained) numerical optimization problem (NOP). DEPSO combines the advantages of particle swarm optimization (PSO) and differential evolution (DE). It is incorporated into the cooperative group optimization (CGO) system [2].

The DEPSO paper has been cited over 400 times in various applications. DEPSO was also implemented (by Sun Microsystems Inc.) in NLPSolver (Solver for Nonlinear Programming), an extension of Calc in Apache OpenOffice.

Related Information: Please find other related code and software in our Code Library.

Basic Description What’s New Problem to be solved Setting Parameters Output Information References Contact

License information: DEPSO is free software; you can redistribute it and/or modify it under the Creative Commons Non-Commercial License 3.0.

System Requirements: DEPSO is a platform-independent software developed in JAVA (version 1.4 or above).

Command line (examples): \$ java DEPSO Problem=<Problem_Name> [NAME=VALUE] …

What’s New

Version V1.0.004: [Github]

• Allows dynamically accessing the problem instance.

• The adaptive constraints relaxing (ACR) rule [3] may tackle problems with equality constraints more efficiently than the basic constraint-handling (BCH) rule does.
• Newly introduced parameter: isACR.
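A minimal sketch of the comparison behind such constraint-handling rules, assuming the common convention that a lower weighted violation Vcon wins and that ties are broken by the objective Vopt. The ACR rule additionally adapts the relaxation tolerance over the learning cycles (not shown); this is not the project's literal implementation:

```java
// Sketch of a basic constraint-handling comparison over <Vcon, Vopt> pairs:
// less weighted violation wins; among equally violated (or both feasible)
// solutions, the smaller objective value wins.
public class BchCompare {
    // Returns true if solution a = <vconA, voptA> is better than b.
    static boolean better(double vconA, double voptA,
                          double vconB, double voptB) {
        if (vconA != vconB) return vconA < vconB;  // less violation wins
        return voptA < voptB;                      // then smaller objective
    }

    public static void main(String[] args) {
        // A feasible solution beats an infeasible one despite a worse objective.
        System.out.println(better(0.0, 3.0, 0.5, 1.0)); // true
        // Both feasible: the smaller objective wins.
        System.out.println(better(0.0, 3.0, 0.0, 1.0)); // false
    }
}
```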

Version V1.0.002:

• Bug fixed (Reported by Miklos Espak, Nov 16, 2010): ProblemEncoder.java

Version V1.0.001:

• For boundary handling, the cycled version [4] is used instead of the periodic version [5], so that all new solutions are generated within the original search space while the agents search within a virtually infinite space. In addition, the limitation of maximal velocity is no longer required.
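One plausible reading of this boundary handling is a modular mapping: an agent may fly through a virtually infinite space, but each coordinate is mapped back into the original [lo, hi] interval before evaluation. The code below illustrates that idea only; it is not necessarily the exact rule of [4]:

```java
// Wrap an unbounded coordinate back into [lo, hi) via modular arithmetic,
// so generated solutions always lie in the original search space.
public class BoundaryWrap {
    static double wrap(double x, double lo, double hi) {
        double w = hi - lo;
        double r = (x - lo) % w;   // Java's % keeps the sign of (x - lo)
        if (r < 0) r += w;         // shift negative remainders into [0, w)
        return lo + r;
    }

    public static void main(String[] args) {
        System.out.println(wrap(1.5, 0.0, 1.0));   // 0.5
        System.out.println(wrap(-0.25, 0.0, 1.0)); // 0.75
    }
}
```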

Version V1.0.000:

• It implements the original DEPSO algorithm [1].
• Setting parameters: Problem, N, T, Tout, FACTOR, CR, c1, c2, weight.

Problem to be solved

The problem to be solved is the (constrained) numerical optimization problem (NOP), including nonlinear programming problems.

To implement your own problem instance, you need to create a JAVA source file, normally placed in the directory problem/unconstrained (if the problem has no constraints) or problem/constrained (if it has constraints).

Implementation Tips: 1) all the variable bounds must be specified; 2) any equality constraint should be relaxed by a small tolerance value (e.g., ε=1E-4 for problem.constrained.Michalewicz_G3); and 3) problem.ProblemEncoder and problem.UnconstrainedProblemEncoder are the parent classes of all constrained (e.g., problem.constrained.Michalewicz_G1) and unconstrained (e.g., problem.unconstrained.GoldsteinPrice) problems, respectively.

A more detailed description of the problem and its implementation can be found here.

Setting parameters [NAME=VALUE]
NAME     VALUE_type   Range      Default_Value    Description
Problem  String       *          <Problem_Name>   The problem to be solved
//For example: problem.constrained.Michalewicz_G2 is the default value
-----------------------------------------------------------------------------------------

N        integer      >5         70               The number of agents
T        integer      >1         2000             The maximum learning cycles
//The total number of evaluation times is about N*T

isACR    boolean                 false            Constraint-handling: BCH(false), ACR(true)
//Basic constraint-handling (BCH) rule or adaptive constraints relaxing (ACR) rule

Tout     integer      >0         100              The output interval (not important)
//The program outputs runtime information of the best solution every "Tout" cycles.

FACTOR   real         (0, 1.2]   0.5              DE: scale constant
CR       real         [0, 1]     0.9              DE: crossover constant
//The parameters of the DE operator; two suggested settings:
// 1) FACTOR=0.5, CR=0.9; 2) FACTOR=0.5, CR=0.1

c1       real         [0, 2]     1.494            PSO: learning factor for pbest
c2       real         [0, 2]     1.494            PSO: learning factor for gbest
weight   real         [0, 1]     0.729            PSO: inertia weight
//The parameters of the PSO operator; default values: c1=c2=1.494, weight=0.729
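To illustrate what the DE and PSO parameters above control, here is a sketch of a classic DE/rand/1 trial vector (FACTOR, CR) and a standard PSO velocity update (weight, c1, c2). DEPSO's actual way of combining the two operators follows [1]; this block only shows the role of each parameter:

```java
import java.util.Random;

// The two operators behind DEPSO's parameters, in textbook form.
public class OperatorSketch {
    // DE: trial[j] = x1[j] + FACTOR*(x2[j]-x3[j]) with probability CR,
    // else keep base[j]; one dimension (jRand) always crosses over.
    static double[] deTrial(double[] base, double[] x1, double[] x2,
                            double[] x3, double factor, double cr, Random rnd) {
        double[] trial = base.clone();
        int jRand = rnd.nextInt(base.length);
        for (int j = 0; j < base.length; j++) {
            if (j == jRand || rnd.nextDouble() < cr) {
                trial[j] = x1[j] + factor * (x2[j] - x3[j]);
            }
        }
        return trial;
    }

    // PSO: v' = weight*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    static double psoVelocity(double v, double x, double pbest, double gbest,
                              double weight, double c1, double c2, Random rnd) {
        return weight * v
             + c1 * rnd.nextDouble() * (pbest - x)
             + c2 * rnd.nextDouble() * (gbest - x);
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        // Each crossed dimension becomes 1 + 0.5*(2-1) = 1.5; others stay 0.
        double[] t = deTrial(new double[]{0, 0}, new double[]{1, 1},
                             new double[]{2, 2}, new double[]{1, 1},
                             0.5, 0.9, rnd);
        System.out.println(t.length); // 2
    }
}
```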


Output Information

[Parsing information]: provide the parsing information for all input parameters.

[Setting information]: show the information of all setting parameters for the algorithm.

[Runtime information]: The program outputs runtime information, i.e., the evaluation values <Vcon, Vopt> of the best solution, at every “Tout” cycles.
//Vopt: the value of the objective function; Vcon: the weighted constraint violation value (≥0), which is not output if Vcon≡0, since there is no violation

[Summary information]: At the end, it outputs the input variables, response values, and evaluation values <Vcon, Vopt> of the best solution.

References
[1] Wen-Jun Zhang and Xiao-Feng Xie. DEPSO: Hybrid particle swarm with differential evolution operator. In IEEE International Conference on Systems, Man, and Cybernetics, pages 3816-3821, Washington, DC, USA, 2003. IEEE.
@InProceedings{Zhang:2003p3816,
Title = {{DEPSO}: Hybrid particle swarm with differential evolution operator},
Author = {Wen-Jun Zhang and Xiao-Feng Xie},
Booktitle = {IEEE International Conference on Systems, Man, and Cybernetics},
Year = {2003},
PDF={http://www.wiomax.com/team/xie/paper/SMCC03.pdf},
DOI={10.1109/ICSMC.2003.1244483},
Code={http://www.wiomax.com/depso},
Pages = {3816--3821},
Publisher = {IEEE}
}
[2] Xiao-Feng Xie, Jiming Liu, and Zun-Jing Wang. A cooperative group optimization system. Soft Computing, 18(3): 469-495, 2014.
@Article{xie2014cooperative,
Title = {A cooperative group optimization system},
Author = {Xie, Xiao-Feng and Liu, Jiming and Wang, Zun-Jing},
Journal = {Soft Computing},
Year = {2014},
Number = {3},
Pages = {469--495},
Volume = {18},
PDF={http://www.wiomax.com/team/xie/paper/SOCO14.pdf},
DOI={10.1007/s00500-013-1069-8},
Publisher = {Springer}
}
[3] Xiao-Feng Xie, Wen-Jun Zhang, and De-Chun Bi. Handling equality constraints by adaptive relaxing rule for swarm algorithms. In Congress on Evolutionary Computation (CEC), pages 2012-2016, Portland, OR, USA, 2004.
@InProceedings{Xie:2004p2012,
Title = {Handling equality constraints by adaptive relaxing rule for swarm algorithms},
Author = {Xiao-Feng Xie and Wen-Jun Zhang and De-Chun Bi},
Year = {2004},
Pages = {2012--2016},
Booktitle = {Congress on Evolutionary Computation (CEC)},
PDF={http://www.wiomax.com/team/xie/paper/CEC04_ECH.pdf},
DOI={10.1109/CEC.2004.1331143},
}
[4] Xiao-Feng Xie and Jiming Liu. A compact multiagent system based on autonomy oriented computing. In IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT), pages 38-44, Compiegne, France, 2005. IEEE.
@InProceedings{Xie:2005p1406,
Title = {A compact multiagent system based on autonomy oriented computing},
Author = {Xiao-Feng Xie and Jiming Liu},
Booktitle = {IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT)},
Year = {2005},
PDF={http://www.wiomax.com/team/xie/paper/IAT05.pdf},
DOI={10.1109/IAT.2005.6},
Pages = {38--44},
Publisher = {IEEE}
}
[5] Xiao-Feng Xie and Wen-Jun Zhang. SWAF: Swarm algorithm framework for numerical optimization. In Genetic and Evolutionary Computation Conference (GECCO), pages 238-250, Seattle, WA, USA, 2004. Springer.
@InProceedings{Xie:2004p238,
Title = {{SWAF}: Swarm algorithm framework for numerical optimization},
Author = {Xiao-Feng Xie and Wen-Jun Zhang},
Booktitle = {Genetic and Evolutionary Computation Conference (GECCO)},
Year = {2004},
PDF={http://www.wiomax.com/team/xie/paper/GECCO04_SWAF.pdf},
DOI={10.1007/978-3-540-24854-5_21},
}