International Journal of Information Technology, Modeling and Computing (IJITMC) Vol. 4, No.1, February 2016
DOI : 10.5121/ijitmc.2016.4104
AN IMPROVED ITERATIVE METHOD FOR SOLVING
GENERAL SYSTEM OF EQUATIONS VIA GENETIC
ALGORITHMS
Seyed Abolfazl Shahzadehfazeli 1,2 and Zainab Haji Abootorabi 2,3
1 Parallel Processing Laboratory, Yazd University, Yazd, Iran
2 Department of Computer Science, Faculty of Mathematics, Yazd University, Yazd, Iran
3 Department of Mathematics, PNU University, Yazd, Iran
ABSTRACT
Various algorithms are known for solving linear systems of equations. Iterative methods are recommended for large sparse linear systems, but in the case of general n×m matrices the classic iterative algorithms are not applicable except in a few cases. The algorithm presented here is based on minimizing the residual of the solution and has genetic characteristics which suggest the use of Genetic Algorithms. It is therefore well suited to the construction of parallel algorithms. In this paper, we describe a sequential version of the proposed algorithm and present its theoretical analysis. Moreover, we show some numerical results of the sequential algorithm, supply an improved algorithm, and compare the two algorithms.
Keywords
Large sparse linear systems, Iterative Genetic algorithms, Parallel algorithm.
1. INTRODUCTION
Let A be a general n ×m matrix. The main problem is to solve the linear system of equations:
Ax = b (1)
where x ∈ R^m and b ∈ R^n are the solution and the given right-hand side vectors. The existence and uniqueness of the solution of (1) can be determined from the matrix A and the vector b. Theoretically, the Gaussian or Gauss-Jordan elimination algorithm is an appropriate tool to solve system (1) and to decide the question of solvability. However, when floating-point arithmetic is used for large systems, these direct algorithms are impractical. For such cases, iterative algorithms are suitable. Effective iterative algorithms are known for symmetric positive definite linear systems.
In general, iterative algorithms can be written in the form of:
x(n)=B x(n−1)+d, n=1, 2,... (2)
where B and d are a matrix and a vector chosen so that the stationary solution of (2) is equivalent to (1), see [1]. These iterative algorithms can also be applied to general nonsymmetric linear systems if we solve the following normal system:
AᵀAx = Aᵀb = v    (3)
instead of the original one. A disadvantage of this approach is that, although the resulting linear system (3) is Hermitian for matrices with full rank, its condition number is the square of the original condition number. Therefore, the convergence will be very slow. For general linear systems where A is non-Hermitian, instead of using some variant of the Conjugate Gradient (CG) algorithm, the most successful schemes include the generalized minimal residual algorithm (GMRES), see [9, 10], and the biconjugate gradient algorithm (BCG), see [2].
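The squaring of the condition number mentioned above is easy to observe numerically. The following small check (ours, not from the paper; the matrix values are made up for illustration) verifies that cond(AᵀA) = cond(A)² for a full-rank rectangular matrix, and that solving the normal system yields the least-squares solution:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [0.0, 1.0]])          # a full-rank 3x2 matrix
b = np.array([1.0, 0.0, 2.0])

cond_A = np.linalg.cond(A)          # 2-norm condition number of A
cond_N = np.linalg.cond(A.T @ A)    # condition number of the normal matrix

print(cond_A, cond_N)               # cond_N equals cond_A**2

# Solving the normal system A^T A x = A^T b gives the least-squares solution:
x = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```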
A more effective approach was suggested by Freund and Nachtigal [5] for the case of general nonsingular non-Hermitian systems, called the quasi-minimal residual algorithm (QMR). An iterative minimal residual algorithm slightly different from the above ones uses Genetic Algorithms (GA), see [4, 6, 7, 8].
In the following, we describe an improved method using genetic algorithms, in which the initial population is larger, a broader search field is used, and the crossover operator applied to the initial population enhances the convergence speed of the algorithm. In general, a genetic algorithm with a larger search space does not guarantee faster convergence, see [3].
In this paper, it is shown that our improved method is in practice much faster than previous versions. This advantage can be very important for the development of these algorithms for parallel processing. The result obtained in [8] is briefly reviewed here to clarify the improved algorithm.
2. AN ITERATIVE MINIMAL RESIDUAL ALGORITHM
Most iterative algorithms for solving linear systems are based on some minimization algorithm. The normal system (3) can be obtained by least-squares minimization: we have to solve the following problem:
min_{x∈R^m} ||Ax − b||_2^2 = min_{x∈R^m} (Ax − b, Ax − b) = min_{r∈R^n} (r, r) = min_{r∈R^n} ||r||_2^2    (4)
where r=Ax−b is the residual of the vector x.
It is easy to show that equation (4) can be written as (3). More precisely, the necessary condition for the existence and uniqueness of the solution of (4) is the fulfillment of (3). The Hermitian property of the normal matrix AᵀA is a sufficient condition for uniqueness; for general non-Hermitian matrices this condition is not fulfilled in general. One possible algorithm to solve problem (4) can be obtained from the following theorem.
Theorem 1. Let A ∈ R^{n×m} and b ∈ R^n be an arbitrary matrix and vector. Moreover, let x_α ∈ R^m and x_β ∈ R^m be arbitrary different vectors for which A(x_α − x_β) ≠ 0.
Let us introduce the following notations:
r_s = Ax_s − b,  s = α, β,

and

x_{α,β} = c x_α + (1 − c) x_β,   r_{α,β} = c r_α + (1 − c) r_β,

where c ∈ R. We have Ax_{α,β} − b = r_{α,β}. Then, the solution of the minimization problem (4) is the vector x_{α,β} with c, where
c = (r_β, r_β − r_α) / ||r_α − r_β||_2^2
Moreover,

||r_{α,β}||_2^2 < min{ ||r_α||_2^2, ||r_β||_2^2 }.
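Theorem 1 can be checked numerically. The short sketch below (ours, with made-up random data) forms the residuals of two trial vectors, computes the coefficient c from the formula above, and confirms that the combined vector's residual is no larger than either parent's:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))     # arbitrary 6x4 system
b = rng.standard_normal(6)

x_a = rng.standard_normal(4)        # two arbitrary different trial vectors
x_b = rng.standard_normal(4)
r_a = A @ x_a - b
r_b = A @ x_b - b

d = r_a - r_b
c = r_b @ (r_b - r_a) / (d @ d)     # optimal combination coefficient from Theorem 1

x_ab = c * x_a + (1 - c) * x_b
r_ab = c * r_a + (1 - c) * r_b      # equals A @ x_ab - b by linearity

print(np.allclose(A @ x_ab - b, r_ab))                                        # True
print(np.linalg.norm(r_ab) <= min(np.linalg.norm(r_a), np.linalg.norm(r_b)))  # True
```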
The Algorithm 1
From Theorem 1 we obtain an algorithm (see [8]) which generates an approximate solution sequence x_k, k = 1, 2, 3,... with residual vectors r_k, k = 1, 2, 3,...
1) Let x_1 be an arbitrary vector and ε the tolerance.
2) Calculate r_1 = Ax_1 − b.
3) Generate an arbitrary vector x_2 such that r_1 − r_2 ≠ 0.
4) Calculate c_{1,2}.
5) Calculate the new vectors x_{1,2} := c_{1,2} x_1 + (1 − c_{1,2}) x_2 and r_{1,2} := c_{1,2} r_1 + (1 − c_{1,2}) r_2.
6) x_1 := x_{1,2} and r_1 := r_{1,2}.
7) If ||r_1|| < ε then go to 8, else go to 3.
8) The approximate solution is x_1.
9) End of algorithm.
Algorithm 1 is the simplest algorithm that can be obtained from Theorem 1; consequently, it does not converge faster than the classical ones.
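The steps above can be sketched in a few lines of Python. This is our reading of the pseudocode, not the authors' implementation; identifier names, the random choice of trial vectors, and the iteration cap are ours:

```python
import numpy as np

def algorithm1(A, b, eps=1e-3, max_iter=200000, seed=0):
    rng = np.random.default_rng(seed)
    m = A.shape[1]
    x1 = rng.standard_normal(m)              # step 1: arbitrary starting vector
    r1 = A @ x1 - b                          # step 2
    it = 0
    for it in range(max_iter):
        if np.linalg.norm(r1) < eps:         # step 7: stopping test
            break
        x2 = rng.standard_normal(m)          # step 3: arbitrary second vector
        r2 = A @ x2 - b
        d = r1 - r2
        if d @ d == 0.0:                     # require r1 - r2 != 0
            continue
        c = r2 @ (r2 - r1) / (d @ d)         # step 4: coefficient from Theorem 1
        x1 = c * x1 + (1 - c) * x2           # steps 5-6: update x1 and r1
        r1 = c * r1 + (1 - c) * r2
    return x1, it

# Example on a small consistent 3x2 system:
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
b = A @ np.array([1.0, -1.0])                # exact solution (1, -1)
x, iters = algorithm1(A, b)
print(np.linalg.norm(A @ x - b) < 1e-3)      # True
```

Note that each pass strictly decreases the residual norm (by Theorem 1), so the loop makes monotone progress even though x2 is chosen at random.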
3. THE IMPROVED ALGORITHM USING GA
Genetic algorithms (GAs) were first proposed by John Holland and were developed by Holland and his colleagues at the University of Michigan in the 1960s and 1970s. GAs work very well on continuous and discrete combinatorial problems, but they tend to be computationally expensive. GAs have improved tremendously in the past two decades. A genetic algorithm is a search technique used in computing to find true or approximate solutions to optimization and search problems. GAs are in the class of global search heuristics. They are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, selection, crossover and mutation.
Selection: The stage of a genetic algorithm in which individual genomes are chosen from a population for the crossover operator is called selection. There are many ways to select the best chromosomes, for example roulette-wheel selection, Boltzmann selection, tournament selection, rank selection, steady-state selection and others.
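As a concrete illustration of the first scheme named above, here is a minimal sketch of roulette-wheel selection (function and population names are ours): each chromosome is picked with probability proportional to its fitness.

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)        # spin the wheel
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]                    # guard against rounding at the end

population = ["A", "B", "C"]
fitnesses = [1.0, 1.0, 8.0]                  # "C" should be picked ~80% of the time
random.seed(0)
picks = [roulette_select(population, fitnesses) for _ in range(1000)]
print(picks.count("C") / 1000)               # close to 0.8
```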
Crossover: After we have decided which encoding to use, we can proceed to crossover. Crossover selects genes from the parent chromosomes and creates new offspring. The simplest way to do this is to choose a crossover point at random, copy everything before this point from the first parent, and copy everything after the crossover point from the second parent. There are many crossover methods, for example single-point crossover, two-point crossover, uniform crossover and arithmetic crossover.
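The single-point scheme just described can be sketched as follows (an illustrative helper of ours, for list-encoded chromosomes):

```python
import random

def single_point_crossover(parent1, parent2):
    """Split both parents at one random point and swap the tails."""
    point = random.randint(1, len(parent1) - 1)     # crossover point
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

random.seed(1)
a, b = single_point_crossover([0, 0, 0, 0, 0], [1, 1, 1, 1, 1])
print(a, b)   # complementary children, e.g. [0, 0, 1, 1, 1] and [1, 1, 0, 0, 0]
```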
Mutation: After a crossover is performed, mutation takes place. This prevents all solutions in the population from falling into a local optimum of the solved problem. Mutation changes the new offspring randomly. Like crossover, mutation depends on the encoding. For example, when encoding permutations, mutation could exchange two genes; for binary encoding we can switch a few randomly chosen bits from 1 to 0 or from 0 to 1.
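For binary encoding, the bit-flip mutation just described amounts to flipping each gene independently with a small probability (an illustrative sketch of ours; the rate value is arbitrary):

```python
import random

def mutate(chromosome, rate=0.1):
    """Flip each binary gene independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in chromosome]

random.seed(0)
child = mutate([0, 1, 0, 1, 0, 1, 0, 1], rate=0.5)
print(child)   # same length, each gene possibly flipped
```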
The most important parts of the genetic algorithm are crossover and mutation; its performance is influenced mainly by these two operators. The implementation and type of the operators depend on the given problem and encoding.
The evolution usually starts from a population of randomly generated individuals and proceeds in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are selected from the current population and modified to form a new population, and the new population is used in the next iteration of the algorithm. The algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.
The Basic Genetic Algorithm
1) Generate random population of n chromosomes.
2) Evaluate the fitness function of each chromosome x in the population.
3) Create a new population by repeating the following steps until the new population is complete:
a) Selection: Select two parent chromosomes from the population according to their fitness.
b) Crossover: With a crossover probability, cross over the parents to form new offspring (children). If no crossover is performed, the offspring is an exact copy of the parents.
c) Mutation: With a mutation probability, mutate the new offspring at each locus.
d) Place the new offspring in the new population.
4) Use the newly generated population for a further run of the algorithm.
5) If the end condition is satisfied, stop and return the best solution in the current population.
6) Otherwise, go to step 2.
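The six steps above can be assembled into a compact sketch. This is our illustrative toy example (maximizing the number of ones in a bit string); all names, parameter values, and the fitness function are ours, not from the paper:

```python
import random

def run_ga(n_bits=20, pop_size=30, p_cross=0.9, p_mut=0.02, generations=100):
    random.seed(0)
    def fitness(ch):                                     # step 2: fitness function
        return sum(ch)
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]                     # step 1: random population
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:                   # step 3
            # 3a: fitness-proportionate selection of two parents
            p1, p2 = random.choices(pop, weights=[fitness(c) + 1 for c in pop], k=2)
            if random.random() < p_cross:                # 3b: single-point crossover
                pt = random.randint(1, n_bits - 1)
                c1, c2 = p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]
            else:                                        # else copy the parents
                c1, c2 = p1[:], p2[:]
            c1 = [1 - g if random.random() < p_mut else g for g in c1]  # 3c: mutation
            c2 = [1 - g if random.random() < p_mut else g for g in c2]
            new_pop += [c1, c2]                          # 3d: place offspring
        pop = new_pop[:pop_size]                         # step 4: next generation
        best = max(pop + [best], key=fitness)            # step 5: track best so far
    return best

best = run_ga()
print(sum(best))   # close to n_bits after 100 generations
```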
The three most important aspects of using genetic algorithms are:
1) Definition of the objective function.
2) Definition and implementation of the genetic representation.
3) Definition and implementation of the genetic operators.
Once these three have been defined, the genetic algorithm should work fairly well.
In Algorithm 1, we choose x_1 and x_2 arbitrarily, then use the crossover operator to reach an optimal x_{1,2} and substitute it for x_1. Then, we randomly select x_2 again. This process is continued until a fairly accurate approximation to the solution of the linear equations Ax = b is achieved. In the improved algorithm, instead of x_1 and x_2, i.e. instead of the original two-parent population, m parents are chosen. (In the chosen names x_1, x_2, ..., x_m, m is the number of columns of matrix A.)
The crossover operator performed on the initial population generates the vectors x_{1,2}, ..., x_{m-1,m}. This process is repeatedly performed on the newly generated vectors until a single vector x_{1,2,3,...,m} as
an approximate initial solution is obtained. This is then substituted for x_1; we also randomly select x_2, ..., x_m again as the second population, and the algorithm is repeated until a sufficiently close solution is obtained. The following table shows how the new vectors are generated. For details refer to Algorithm 2.
x_1
        x_{1,2}
x_2                 x_{1,2,3}
        x_{2,3}
x_3                 x_{2,3,4}    ...    x_{1,2,3,...,m}
...
        x_{m-1,m}
x_m
Algorithm 1 is now improved in order to increase the convergence speed.
The Algorithm 2
1) Let x_1 be an arbitrary vector, ε the error tolerance and i = 1.
2) Calculate r_1 = Ax_1 − b.
3) Generate arbitrary vectors x_2, ..., x_m such that r_i − r_j ≠ 0 (i ≠ j), i, j = 1, ..., m.
4) Calculate C_k = (r_{k+1}, r_{k+1} − r_k) / ||r_k − r_{k+1}||_2^2, for k = i, ..., m−1.
5) Calculate the new vectors x_{k,k+1} = C_k x_k + (1 − C_k) x_{k+1} and r_{k,k+1} = C_k r_k + (1 − C_k) r_{k+1}, for k = i, ..., m−1.
6) x_{k+1} := x_{k,k+1} for k = i, ..., m−1, and i := i + 1.
7) If i = m−1, then x_1 := x_{m−1,m} and r_1 := r_{m−1,m}, else go to 4.
8) If ||r_1||_2 < ε then go to 9, else go to 3.
9) The approximate solution is x_1.
10) End of algorithm.
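The cascade above can be sketched in Python as follows. This is our reading of the pseudocode, not the authors' implementation; identifier names, the random population, and the generation cap are ours. Each generation combines neighbouring vectors with the optimal coefficients C_k until a single improved vector remains, which then seeds the next generation:

```python
import numpy as np

def combine(xs, rs):
    """One crossover pass: optimally combine each neighbouring pair.

    Assumes r_k != r_{k+1} for every pair (step 3 of the algorithm);
    with random trial vectors this holds almost surely."""
    nxs, nrs = [], []
    for k in range(len(xs) - 1):
        d = rs[k] - rs[k + 1]
        c = 0.0 if d @ d == 0.0 else rs[k + 1] @ (rs[k + 1] - rs[k]) / (d @ d)
        nxs.append(c * xs[k] + (1 - c) * xs[k + 1])
        nrs.append(c * rs[k] + (1 - c) * rs[k + 1])
    return nxs, nrs

def algorithm2(A, b, eps=1e-3, max_gen=10000, seed=0):
    rng = np.random.default_rng(seed)
    m = A.shape[1]
    x1 = rng.standard_normal(m)
    r1 = A @ x1 - b
    gen = 0
    for gen in range(max_gen):
        if np.linalg.norm(r1) < eps:
            break
        # step 3: population of m parents (current x1 plus m-1 random vectors)
        xs = [x1] + [rng.standard_normal(m) for _ in range(m - 1)]
        rs = [A @ x - b for x in xs]
        while len(xs) > 1:                 # steps 4-7: cascade down to one vector
            xs, rs = combine(xs, rs)
        x1, r1 = xs[0], rs[0]
    return x1, gen

# Example on a small consistent 6x4 system:
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = A @ np.ones(4)                          # exact solution (1, 1, 1, 1)
x, gens = algorithm2(A, b)
print(np.linalg.norm(A @ x - b) < 1e-3)     # True
```

Since each pairwise combination cannot increase the residual norm (Theorem 1), the final vector of each cascade is at least as good as every member of that generation's population.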
4. NUMERICAL EXPERIMENTS
In this section, we compare Algorithm 1 and Algorithm 2 on different examples and review the speed of the algorithms; several examples are shown in the summary table, and one example is presented in more detail. In the examples, the condition numbers of the matrices A are chosen rather small (the coefficient matrices are well-conditioned).
The following table (Table 1) compares the number of iterations of the two algorithms.
Figure 1 shows that for matrix A1, with condition number 80.903 and spectral radius 15.7009, Algorithm 1 converges after 136720 iterations, while the number of iterations of the improved algorithm (Algorithm 2) is 16129. This is a notable reduction.
Table 1. The number of iterations.

Matrix   Dim.    Tol.    No. of iter. algorithm 1   No. of iter. algorithm 2
A1       15×20   10^-3   136720                     16129
A2       20×15   10^-3   10691                      1812
A3       20×25   10^-3   52273                      8285
A4       25×20   10^-3   665                        279
A5       25×30   10^-3   119041                     22920
A6       30×25   10^-3   805                        349
A7       35×30   10^-3   1228                       436
A8       40×35   10^-3   1390                       500
Figure 1. Speed of convergence of the Algorithm 1 and Algorithm 2 on the A1 matrix.
[Plot: residual norm (logarithmic scale, 10^-3 to 10^3) versus iteration number (0 to 14×10^4), with curves for Algo.1 and Algo.2.]
5. CONCLUSION
In this paper, an improved algorithm for solving systems of linear equations is presented. In contrast to other iterative methods (Jacobi, Gauss-Seidel, conjugate gradient, and even Gauss elimination), this method does not share their limitations. The genetic algorithm provides an appropriate means of eliminating these restrictions and is a simple method for obtaining the solution. As the examples show, the number of iterations in Algorithm 2 is dramatically reduced. The merit of the algorithm is its simplicity of use, especially for non-square systems, and the possibility of extending it to large systems of equations by incorporating parallel computing.
REFERENCES
[1] Hageman L. A. & Young D. M., (1981) Applied Iterative Methods, Computer Science and Applied Mathematics, Academic Press.
[2] Hestenes M. R. & Stiefel E., (1952) Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand. 49, 409-436.
[3] Hoppe T., (2006) Optimization of Genetic Algorithms, Drexel University, Research Paper.
[4] Koza J. R., Bennett H. B., Andre D., & Keane M. A., (1999) Genetic Programming III: Darwinian Invention and Problem Solving, Morgan Kaufmann Publishers.
[5] Lanczos C., (1952) Solution of systems of linear equations by minimized iterations, J. Res. Natl. Bur. Standards, 49, 33-53.
[6] Michalewicz Z., (1996) Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, third edition.
[7] Mitchell M., (1996) An Introduction to Genetic Algorithms, Cambridge, MA: The MIT Press.
[8] Molnárka G. & Miletic, (2004) A Genetic Algorithm for Solving General System of Equations, Department of Mathematics, Széchenyi István University, Győr, Hungary.
[9] Molnárka G. & Török B., (1996) Residual Elimination Algorithm for Solving Linear Equations and Application to Sparse Systems, Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM), Issue 1, Numerical Analysis, Scientific Computing, Computer Science, 485-486.
[10] Saad Y. & Schultz M. H., (1986) GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7, 856-869.
Mastering Secure Login Mechanisms for React Apps.pdf
Brion Mario
Ìý
UHV UNIT-5 IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON ...
UHV UNIT-5    IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON ...UHV UNIT-5    IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON ...
UHV UNIT-5 IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON ...
ariomthermal2031
Ìý
Energy Transition Factbook Bloomberg.pdf
Energy Transition Factbook Bloomberg.pdfEnergy Transition Factbook Bloomberg.pdf
Energy Transition Factbook Bloomberg.pdf
CarlosdelaFuenteMnde
Ìý
UHV UNIT-I INTRODUCTION TO VALUE EDUCATION .pptx
UHV UNIT-I INTRODUCTION TO VALUE EDUCATION  .pptxUHV UNIT-I INTRODUCTION TO VALUE EDUCATION  .pptx
UHV UNIT-I INTRODUCTION TO VALUE EDUCATION .pptx
ariomthermal2031
Ìý
Using 3D CAD in FIRST Tech Challenge - Fusion 360
Using 3D CAD in FIRST Tech Challenge - Fusion 360Using 3D CAD in FIRST Tech Challenge - Fusion 360
Using 3D CAD in FIRST Tech Challenge - Fusion 360
FTC Team 23014
Ìý
module-4.1-Class notes_R and DD_basket-IV -.pdf
module-4.1-Class notes_R and DD_basket-IV -.pdfmodule-4.1-Class notes_R and DD_basket-IV -.pdf
module-4.1-Class notes_R and DD_basket-IV -.pdf
ritikkumarchaudhury7
Ìý
wind energy types of turbines and advantages
wind energy types of turbines and advantageswind energy types of turbines and advantages
wind energy types of turbines and advantages
MahmudHalef
Ìý
DAY 4VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV.pptx
DAY 4VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV.pptxDAY 4VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV.pptx
DAY 4VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV.pptx
GellaBenson1
Ìý

In general, iterative algorithms can be written in the form

x(n) = B x(n−1) + d,  n = 1, 2, ...  (2)

where B and d are a matrix and a vector chosen so that the stationary solution of (2) is equivalent to (1), see [1]. These iterative algorithms can also be applied to general nonsymmetric linear systems if we solve the normal system

A^T A x = A^T b  (3)
instead of the original one. A disadvantage of this approach is that for matrices with full rank the resulting system (3) is Hermitian, but its condition number is the square of the original condition number; therefore, the convergence is very slow.

For general linear systems where A is non-Hermitian, instead of some variant of the conjugate gradient (CG) algorithm, the most successful schemes are the generalized minimal residual algorithm (GMRES), see [9, 10], and the biconjugate gradient algorithm (BCG), see [2]. A more effective approach, the quasi-minimal residual algorithm (QMR), was suggested by Freund and Nachtigal ([5]) for general nonsingular non-Hermitian systems. An iterative minimal residual algorithm slightly different from the above ones uses genetic algorithms (GA), see [4, 6, 7, 8].

In the following, we describe an improved method using genetic algorithms, in which the initial population is larger, a broader search field is used, and the crossover operator applied to the initial population increases the convergence speed. In general, a genetic algorithm with a larger search space does not guarantee faster convergence, see [3]. In this paper, it is shown that our improved method is in practice much faster than the previous variants. This advantage can be very important for developing these algorithms for parallel processing. The result obtained in [8] is briefly reviewed here to clarify the improved algorithm.

2. AN ITERATIVE MINIMAL RESIDUAL ALGORITHM

Most iterative algorithms for solving linear systems are based on some minimization procedure. We can obtain the normal system (3) in the following way by least squares minimization.
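The squaring of the condition number under the normal-equations approach is easy to observe numerically. The following NumPy sketch is illustrative only; the matrix, sizes and seed are our own choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # a general n x m matrix with full column rank
b = rng.standard_normal(6)

# Solve the normal system (3): A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)

# x is the least-squares solution of (1): the residual of (3) vanishes ...
print(np.linalg.norm(A.T @ (A @ x - b)))

# ... but the condition number of A^T A is the square of that of A.
print(np.isclose(np.linalg.cond(A.T @ A), np.linalg.cond(A) ** 2))  # True
```

For a well-conditioned A this is harmless, but for an ill-conditioned A the squared condition number is exactly what makes iterations on (3) converge slowly.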
We have to solve the following problem:

min_{x∈R^m} ‖Ax − b‖^2 = min_{x∈R^m} (Ax − b, Ax − b) = min (r, r) = min ‖r‖^2  (4)

where r = Ax − b is the residual of the vector x. It is easy to show that equation (4) can be written as in (3). More precisely, the necessary condition for the existence and uniqueness of the solution of (4) leads to (3). The Hermitian property of the normal matrix A^T A is a sufficient condition for uniqueness; for general non-Hermitian matrices this condition is not fulfilled in general. One possible algorithm to solve problem (4) can be obtained from the following theorem.

Theorem 1. Let A be an arbitrary n×m real matrix and b ∈ R^n an arbitrary vector. Moreover, let x_α ∈ R^m and x_β ∈ R^m be arbitrary different vectors for which A(x_α − x_β) ≠ 0. Let us introduce the following notations:

r_s = A x_s − b,  s = α, β,
x_{α,β} = c x_α + (1 − c) x_β,
r_{α,β} = c r_α + (1 − c) r_β,

where c ∈ R. We have A x_{α,β} − b = r_{α,β}. Then, the solution of the minimization problem (4) along this line is the vector x_{α,β} with c, where
c = (r_β, r_β − r_α) / ‖r_α − r_β‖^2.

Moreover, ‖r_{α,β}‖^2 < min{‖r_α‖^2, ‖r_β‖^2}.

The Algorithm 1

From Theorem 1 we obtain an algorithm (see [8]) which generates an approximate solution sequence x_k, k = 1, 2, 3, ... with residual vectors r_k, k = 1, 2, 3, ...

1) Let x_1 be an arbitrary vector and ε the tolerance.
2) Calculate r_1 = A x_1 − b.
3) Generate an arbitrary vector x_2 such that r_1 − r_2 ≠ 0.
4) Calculate c_{1,2} = (r_2, r_2 − r_1) / ‖r_1 − r_2‖^2.
5) Calculate the new vectors x_{1,2} := c_{1,2} x_1 + (1 − c_{1,2}) x_2 and r_{1,2} := c_{1,2} r_1 + (1 − c_{1,2}) r_2.
6) x_1 := x_{1,2} and r_1 := r_{1,2}.
7) If ‖r_1‖ < ε then go to 8, else go to 3.
8) The approximate solution is x_1.
9) End of algorithm.

Algorithm 1 is the simplest algorithm that can be obtained from Theorem 1, and consequently it does not converge faster than the classical ones.

3. THE IMPROVED ALGORITHM USING GA

Genetic algorithms (GAs) were first proposed by John Holland and were developed by Holland and his colleagues at the University of Michigan in the 1960s and 1970s. GAs work very well on continuous and discrete combinatorial problems, but they tend to be computationally expensive; they have improved tremendously in the past two decades. A genetic algorithm (GA) is a search technique used in computing to find true or approximate solutions to optimization and search problems. GAs belong to the class of global search heuristics. They are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, selection, crossover and mutation.

Selection: the stage of a genetic algorithm in which individual genomes are chosen from a population for the crossover operator.
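The steps of Algorithm 1 can be transcribed directly into NumPy. This is a sketch under our own naming (the function `algorithm1` and the consistent 6×4 test system are our illustration, not code from the paper):

```python
import numpy as np

def algorithm1(A, b, tol=1e-8, max_iter=50000, seed=0):
    """Algorithm 1: repeated optimal crossover of the current iterate
    with a freshly generated random second parent (Theorem 1)."""
    rng = np.random.default_rng(seed)
    m = A.shape[1]
    x1 = rng.standard_normal(m)            # 1) arbitrary starting vector
    r1 = A @ x1 - b                        # 2) its residual
    for _ in range(max_iter):
        if np.linalg.norm(r1) < tol:       # 7) stopping test
            break
        x2 = rng.standard_normal(m)        # 3) arbitrary second parent
        r2 = A @ x2 - b
        d = r1 - r2
        dd = d @ d
        if dd == 0.0:                      # regenerate if r1 - r2 = 0
            continue
        c = (r2 @ (r2 - r1)) / dd          # 4) optimal weight c_{1,2}
        x1 = c * x1 + (1.0 - c) * x2       # 5)-6) crossover and replacement
        r1 = c * r1 + (1.0 - c) * r2
    return x1

# A consistent 6x4 system, so the minimal residual is zero.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = A @ rng.standard_normal(4)
x = algorithm1(A, b)
print(np.linalg.norm(A @ x - b))
```

By Theorem 1 each step cannot increase ‖r_1‖, and with random second parents the residual shrinks geometrically on such small well-conditioned systems.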
There are many ways to select the best chromosomes, for example roulette wheel selection, Boltzmann selection, tournament selection, rank selection, steady-state selection and others.

Crossover: after we have decided which encoding to use, we can proceed to crossover. Crossover selects genes from the parent chromosomes and creates a new offspring. The simplest way to do this is to choose a random crossover point, copy everything before this point from the first parent, and copy everything after the crossover point from the second parent. There are many crossover methods, for example single-point crossover, two-point crossover, uniform crossover and arithmetic crossover.
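As a concrete illustration of single-point crossover on a binary encoding (a minimal example of our own, not code from the paper):

```python
import random

def single_point_crossover(parent1, parent2, rng):
    """Copy everything before a random point from parent1,
    and everything after that point from parent2."""
    point = rng.randrange(1, len(parent1))   # crossover point, never at the ends
    return parent1[:point] + parent2[point:]

rng = random.Random(42)
child = single_point_crossover([0] * 8, [1] * 8, rng)
print(child)   # a run of 0s from parent1 followed by a run of 1s from parent2
```

Two-point and uniform crossover differ only in how the copied positions are chosen; arithmetic crossover instead blends the parents numerically, which is exactly what steps 4-5 of Algorithm 1 do with the weight c_{1,2}.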
Mutation: after a crossover is performed, mutation takes place. Its purpose is to prevent all solutions in the population from falling into a local optimum of the solved problem. Mutation randomly changes the new offspring. As with crossover, mutation depends on the encoding: when encoding permutations, mutation could exchange two genes; for binary encoding we can switch a few randomly chosen bits from 1 to 0 or from 0 to 1.

Crossover and mutation are the two basic operators of a GA, and its performance depends mainly on them. The implementation and type of the operators depend on the given problem and encoding. The evolution usually starts from a population of randomly generated individuals and proceeds in generations. In each generation the fitness of every individual in the population is evaluated, multiple individuals are selected from the current population and modified to form a new population, which is used in the next iteration of the algorithm. The algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.

The Basic Genetic Algorithm

1) Generate a random population of n chromosomes.
2) Evaluate the fitness function of each chromosome x in the population.
3) Create a new population by repeating the following steps until the new population is complete.
a) Selection: select two parent chromosomes from the population according to their fitness.
b) Crossover: with a crossover probability, cross over the parents to form new offspring (children). If no crossover is performed, the offspring is an exact copy of the parents.
c) Mutation: with a mutation probability, mutate the new offspring at each locus.
d) Place the new offspring in the new population.
4) Use the newly generated population for a further run of the algorithm.
5) If the end condition is satisfied, stop and return the best solution in the current population.
6) Go to step 2.

The three most important aspects of using genetic algorithms are:
1) Definition of the objective function.
2) Definition and implementation of the genetic representation.
3) Definition and implementation of the genetic operators.

Once these three have been defined, the genetic algorithm should work fairly well.

In Algorithm 1, we choose x_1 and x_2 arbitrarily, use the crossover operator to reach an optimal x_{1,2}, and substitute it for x_1. Then we randomly select x_2 again, and this process is continued until a fairly accurate approximation of the solution of the linear system Ax = b is achieved. In the improved algorithm, instead of the two-parent population x_1 and x_2, an m-parent population is chosen. (Note that in the names x_1, x_2, ..., x_m, m is the number of columns of the matrix A.) The crossover operator performed on the initial population generates the vectors x_{1,2}, ..., x_{m−1,m}. This process is repeatedly performed on the newly generated vectors until a single vector x_{1,2,3,...,m} as
an approximate initial solution is obtained. This vector now replaces x_1; then x_2, ..., x_m are randomly selected again for the second population, and the algorithm is repeated until a sufficiently close solution is obtained. The following scheme shows how the new vectors are generated (for details see Algorithm 2):

x_1
      x_{1,2}
x_2            x_{1,2,3}
      x_{2,3}
x_3            x_{2,3,4}
      ...            ...       x_{1,2,3,...,m}
      x_{m−1,m}
x_m

Now Algorithm 1 is improved in order to increase the convergence speed.

The Algorithm 2

1) Let x_1 be an arbitrary vector, ε the error tolerance, and i = 1.
2) Calculate r_1 = A x_1 − b.
3) Generate arbitrary vectors x_2, ..., x_m such that r_i − r_j ≠ 0 for i ≠ j, i, j = 1, ..., m.
4) Calculate C_k = (r_{k+1}, r_{k+1} − r_k) / ‖r_k − r_{k+1}‖^2, for k = i, ..., m−1.
5) Calculate the new vectors x_{k,k+1} = C_k x_k + (1 − C_k) x_{k+1} and r_{k,k+1} = C_k r_k + (1 − C_k) r_{k+1}, for k = i, ..., m−1.
6) x_{k+1} := x_{k,k+1} and r_{k+1} := r_{k,k+1}, for k = i, ..., m−1, and i := i + 1.
7) If i = m−1, then x_1 := x_{m−1,m} and r_1 := r_{m−1,m}, else go to 4.
8) If ‖r_1‖^2 < ε then go to 9, else go to 3.
9) The approximate solution is x_1.
10) End of algorithm.

4. NUMERICAL EXPERIMENTS

In this section, we compare Algorithm 1 and Algorithm 2 on different examples and review the speed of the algorithms; several examples are collected in the summary table, and one example is discussed in more detail. In the examples, the condition numbers of the matrices A are chosen rather small (the coefficient matrices are well conditioned). The following table (Table 1) compares the number of iterations of the two algorithms. Figure 1 shows that for the matrix A1, with condition number 80.903 and spectral radius 15.7009, Algorithm 1 converges after 136720 iterations while the improved Algorithm 2 needs only 16129 iterations. This is a notable reduction.
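The m-parent reduction can be sketched in NumPy as follows. This is our own reading of the scheme above (names `combine` and `algorithm2` and the test system are ours): each sweep combines adjacent pairs with the optimal weight of Theorem 1 until a single vector remains, which then seeds the next population.

```python
import numpy as np

def combine(xa, ra, xb, rb):
    """Optimal crossover of two parents (Theorem 1); the combined
    residual norm never exceeds the smaller parent residual norm."""
    d = ra - rb
    dd = d @ d
    if dd == 0.0:                      # degenerate pair: keep the better parent
        return (xa, ra) if ra @ ra <= rb @ rb else (xb, rb)
    c = (rb @ (rb - ra)) / dd
    return c * xa + (1.0 - c) * xb, c * ra + (1.0 - c) * rb

def algorithm2(A, b, tol=1e-8, max_outer=5000, seed=0):
    rng = np.random.default_rng(seed)
    m = A.shape[1]
    x1 = rng.standard_normal(m)
    r1 = A @ x1 - b
    for _ in range(max_outer):
        if np.linalg.norm(r1) < tol:
            break
        # population: the current iterate plus m-1 fresh random parents
        xs = [x1] + [rng.standard_normal(m) for _ in range(m - 1)]
        rs = [A @ v - b for v in xs]
        # pairwise reduction: x_{1,2}, ..., x_{m-1,m}, then x_{1,2,3}, ...,
        # down to the single vector x_{1,2,...,m}
        while len(xs) > 1:
            pairs = [combine(xs[k], rs[k], xs[k + 1], rs[k + 1])
                     for k in range(len(xs) - 1)]
            xs = [p[0] for p in pairs]
            rs = [p[1] for p in pairs]
        x1, r1 = xs[0], rs[0]
    return x1

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = A @ rng.standard_normal(4)
x = algorithm2(A, b)
print(np.linalg.norm(A @ x - b))
```

Each outer pass performs (m−1) + (m−2) + ... + 1 crossovers, so it does more work than one iteration of Algorithm 1, but in our experiments the residual drops far faster per pass, consistent with the iteration counts in Table 1.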
Table 1. The number of iterations.

Matrix   Dim.     Tol.     Iter. Algorithm 1   Iter. Algorithm 2
A1       15×20    10^-3    136720              16129
A2       20×15    10^-3    10691               1812
A3       20×25    10^-3    52273               8285
A4       25×20    10^-3    665                 279
A5       25×30    10^-3    119041              22920
A6       30×25    10^-3    805                 349
A7       35×30    10^-3    1228                436
A8       40×35    10^-3    1390                500

Figure 1. Speed of convergence of Algorithm 1 and Algorithm 2 on the matrix A1 (residual norm versus iteration number, logarithmic scale).
5. CONCLUSION

In this paper, an improved algorithm for solving systems of linear equations is presented. In contrast to other iterative methods (Jacobi, Gauss-Seidel, conjugate gradient and even Gaussian elimination), this method does not impose restrictions on the coefficient matrix. The genetic algorithm removes such restrictions and provides a simple way of obtaining the solution. As the examples show, the number of iterations in Algorithm 2 is greatly reduced. The merit of the algorithm is its simplicity of use, especially for non-square systems, and its extensibility to large systems of equations by incorporating parallel computing.

REFERENCES

[1] Hageman, L. A. & Young, D. M. (1981) Applied Iterative Methods, Computer Science and Applied Mathematics, Academic Press.
[2] Hestenes, M. R. & Stiefel, E. (1952) Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Standards, 49, 409-436.
[3] Hoppe, T. (2006) Optimization of Genetic Algorithms, Drexel University, research paper.
[4] Koza, J. R., Bennett, F. H., Andre, D. & Keane, M. A. (1999) Genetic Programming III: Darwinian Invention and Problem Solving, Morgan Kaufmann Publishers.
[5] Lanczos, C. (1952) Solution of systems of linear equations by minimized iterations, J. Res. Natl. Bur. Standards, 49, 33-53.
[6] Michalewicz, Z. (1996) Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, third edition.
[7] Mitchell, M. (1996) An Introduction to Genetic Algorithms, Cambridge, MA: The MIT Press.
[8] Molnárka, G. & Miletics, E. (2004) A Genetic Algorithm for Solving General System of Equations, Department of Mathematics, Széchenyi István University, Győr, Hungary.
[9] Molnárka, G. & Török, B. (1996) Residual Elimination Algorithm for Solving Linear Equations and Application to Sparse Systems, Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM), Issue 1, Numerical Analysis, Scientific Computing, Computer Science, 485-486.
[10] Saad, Y. & Schultz, M. H. (1986) GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7, 856-869.