Various algorithms are known for solving linear systems of equations, and iterative methods are recommended for large sparse systems. For general n×m matrices, however, the classic iterative algorithms are applicable only in a few special cases. The algorithm presented here is based on minimizing the residual of the solution and has genetic characteristics that call for the use of Genetic Algorithms. It is therefore well suited to the construction of parallel algorithms. In this paper, we describe a sequential version of the proposed algorithm and present its theoretical analysis. Moreover, we report numerical results for the sequential algorithm, supply an improved algorithm, and compare the two algorithms.
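The abstract does not give the iteration itself. As a point of reference, the plain (non-genetic) residual-minimization idea for a general n×m system can be sketched as gradient descent on ||Ax − b||²; the step size and iteration count below are illustrative choices, not taken from the paper, whose genetic variant replaces this deterministic update with evolved candidates.

```python
import numpy as np

def residual_minimize(A, b, steps=5000, lr=None):
    """Minimize ||Ax - b||^2 for a general n x m matrix A by gradient descent."""
    n, m = A.shape
    x = np.zeros(m)
    # A conservative step size based on the largest singular value keeps the iteration stable.
    if lr is None:
        lr = 1.0 / (np.linalg.norm(A, 2) ** 2)
    for _ in range(steps):
        r = A @ x - b          # current residual
        x -= lr * (A.T @ r)    # gradient of 0.5*||Ax - b||^2 is A^T r
    return x

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3 x 2, overdetermined
b = np.array([1.0, 2.0, 3.0])
x = residual_minimize(A, b)
```

For this consistent overdetermined system the iteration converges to the least-squares solution, which here satisfies Ax = b exactly.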
AN IMPROVED ITERATIVE METHOD FOR SOLVING GENERAL SYSTEM OF EQUATIONS VIA GENETIC ALGORITHMS (Zac Darcy)
This presentation is about genetic algorithms and also includes an introduction to soft computing and hard computing. We hope it serves its purpose and is useful as a reference.
An Adaptive Masker for the Differential Evolution Algorithm (IOSR Journals)
The document proposes an adaptive masker technique for the differential evolution algorithm to perform automatic fuzzy clustering. The adaptive masker aims to guide the search process towards the optimal clustering solution by dividing the mask matrix into three zones - a best masks zone, a global best influence zone where the number of clusters is a function of the best fitness, and a random zone. Experimental results on a remote sensing dataset show the proposed adaptive masker differential evolution algorithm performs better than other fuzzy clustering algorithms like iterative fuzzy c-means, improved differential evolution, and variable length genetic algorithm based fuzzy clustering in automatically detecting the optimal number of clusters.
This document discusses advanced optimization techniques used to solve large-scale problems that traditional techniques cannot handle effectively. It introduces several population-based metaheuristic algorithms inspired by natural phenomena, including genetic algorithms, artificial immune algorithms, and differential evolution. Genetic algorithms use operations like selection, crossover and mutation to evolve solutions over generations. Artificial immune algorithms are based on clonal selection to amplify high-affinity antibodies. Differential evolution generates trial vectors through mutation and crossover of randomly selected target vectors.
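The differential-evolution step described above, a trial vector built by mutation and crossover of randomly selected vectors, can be sketched as follows; the population size and the F and CR values are common illustrative defaults, not figures from the document.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_trial(pop, i, F=0.8, CR=0.9):
    """Build one DE/rand/1/bin trial vector for population member i."""
    n, dim = pop.shape
    # pick three distinct members, all different from i
    choices = [j for j in range(n) if j != i]
    a, b, c = pop[rng.choice(choices, size=3, replace=False)]
    mutant = a + F * (b - c)                  # differential mutation
    cross = rng.random(dim) < CR              # binomial crossover mask
    cross[rng.integers(dim)] = True           # guarantee at least one mutant gene
    return np.where(cross, mutant, pop[i])

def sphere(x):
    return float(np.sum(x ** 2))

# Minimize the sphere function with a tiny DE loop and greedy selection.
pop = rng.uniform(-5, 5, size=(20, 3))
for _ in range(200):
    for i in range(len(pop)):
        trial = de_trial(pop, i)
        if sphere(trial) <= sphere(pop[i]):
            pop[i] = trial
best = min(pop, key=sphere)
```

The greedy replacement step is what distinguishes DE's selection from a GA's fitness-proportional selection.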
Quantum inspired evolutionary algorithm for solving multiple travelling sales... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Genetic algorithm guided key generation in wireless communication (GAKG) (IJCI JOURNAL)
In this paper, the proposed technique uses a high-speed stream cipher approach, which is useful where little memory and maximum speed are required for the encryption process. A Self Acclimatize Genetic Algorithm based approach is exploited to generate the key stream used to encrypt and decrypt the plaintext. A widely practiced way to identify a good set of parameters for a problem is experimentation; for these reasons, the proposed enhanced Self Acclimatize Genetic Algorithm (GAKG) offers the most appropriate exploration and exploitation behavior. Parametric tests are performed and the results are compared with some existing classical techniques, showing comparable results for the proposed system.
1) The document presents an approach to solving the inverse kinematics problem of robotic manipulators using genetic algorithms.
2) Genetic algorithms are applied by encoding joint angles into chromosomes and evaluating fitness based on end-effector position and orientation accuracy.
3) The approach handles redundancies and singularities effectively and can compute motions for manipulators to follow specified end-effector paths.
Two-Stage Eagle Strategy with Differential Evolution (Xin-She Yang)
The document describes a two-stage optimization strategy called the Eagle Strategy (ES) that combines global and local search algorithms to improve search efficiency. It evaluates applying ES to differential evolution (DE), a popular evolutionary algorithm. ES first uses randomization like Levy flights for global exploration, then switches to DE for intensive local search around promising solutions. The authors validate ES-DE on test functions, finding it requires only 9.7-24.9% of the function evaluations of pure DE. They also apply it to real-world pressure vessel and gearbox design problems, achieving solutions with 14.9-17.7% fewer function evaluations than pure DE.
This paper reviews Ant Colony Optimization (ACO) and the Genetic Algorithm (GA), two powerful meta-heuristics. It first explains some major defects of these two algorithms, then proposes a new model for ACO in which artificial ants use a quick genetic operator to accelerate the selection of the next state. Experimental results show that the proposed hybrid algorithm is effective and that its performance, in both speed and accuracy, beats the other versions.
Fabric Textile Defect Detection, By Selection A Suitable Subset Of Wavelet Co... (CSCJournals)
This document presents a method for fabric defect detection using wavelet transforms and genetic algorithms. Wavelet transforms are used to extract coefficients from sample fabric images, and a genetic algorithm selects an optimal subset of coefficients that best identify defects. Two separate coefficient sets are determined, one for horizontal defects and one for vertical defects, to improve accuracy. Experimental results on two fabric image databases demonstrate that the technique can effectively detect various defect types and configurations after applying thresholding and denoising post-processing steps to the wavelet-filtered images.
This document proposes an improved genetic algorithm called DGA that combines genetic algorithm and differential evolution. DGA uses adaptive differential evolution as its mutation operator instead of simple genetic algorithm's crossover and mutation. It also adds strategies of optimal reservation and worst elimination. Simulation results show DGA has stronger global optimization ability, faster convergence speed and better stability compared to simple genetic algorithm.
A NEW APPROACH IN DYNAMIC TRAVELING SALESMAN PROBLEM: A HYBRID OF ANT COLONY ... (ijmpict)
Nowadays swarm intelligence-based algorithms are widely used to optimize the dynamic traveling salesman problem (DTSP). In this paper, we use a mixed method of Ant Colony Optimization (ACO) and gradient descent to optimize the DTSP; it differs from the plain ACO algorithm in its evaporation rate and innovative data. This approach prevents premature convergence, escapes local optima, and makes it possible for the algorithm to find better solutions. Compared with some former methods, the combined gradient descent and ACO algorithm shows significantly improved route optimization.
Mimo system-order-reduction-using-real-coded-genetic-algorithm (Cemal Ardil)
This document describes a method for reducing the order of multi-input multi-output (MIMO) systems using real-coded genetic algorithms. The method aims to minimize the integral square error between the transient responses of the original and reduced order models. It treats both the numerator and denominator parameters of the reduced order model as free parameters to be optimized. A real-coded genetic algorithm is used to search for the parameter values that minimize the error. The method is illustrated with an example and shown to produce results comparable to other established order reduction techniques while guaranteeing stability of the reduced model.
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION (ijaia)
Function approximation is a popular engineering problem used in system identification or equation optimization. Due to the complex search space it requires, AI techniques have been used extensively to spot the best curves that match the real behavior of the system. Genetic algorithms are known for their fast convergence and their ability to find an optimal structure for the solution. We propose using a genetic algorithm as a function approximator, focusing on the polynomial form of the approximation. After implementing the algorithm, we report our results and compare them with the real function output.
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti... (Xin-She Yang)
This document discusses applying an eagle strategy inspired by nature to engineering optimization problems. The eagle strategy uses a two-stage approach combining global exploration with local exploitation. Global exploration uses Lévy flights for random walks to diversify solutions. Promising solutions are then locally optimized using an efficient local search algorithm like particle swarm optimization. The document analyzes random walk models like Lévy flights and how they can maintain diversity in swarm intelligence algorithms. It applies the eagle strategy to four engineering design problems, finding Lévy flights can effectively reduce computational efforts.
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION (ijscai)
The generalization error of a classifier can be reduced by a larger margin of the separating hyperplane. The proposed classification algorithm implements a margin in the classical perceptron algorithm to reduce generalization error by maximizing the margin of the separating hyperplane. The algorithm uses the same update rule as the perceptron and converges in a finite number of updates to solutions possessing any desired fraction of the margin. This solution is then further optimized to obtain the maximum possible margin. The algorithm can handle linear, non-linear and multi-class problems. Experimental results place the proposed classifier on par with the support vector machine, and even better in some cases. Some preliminary experimental results are briefly discussed.
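The core idea, keep applying the perceptron update until every point clears a target margin, can be sketched as below; the margin value, learning rate, and toy data are illustrative, and the paper's subsequent margin-maximization stage is not shown.

```python
import numpy as np

def margin_perceptron(X, y, margin=0.5, lr=1.0, epochs=100):
    """Perceptron variant: update on any point with functional margin below
    `margin`, not only on misclassified points, so the returned hyperplane
    separates the data with margin at least `margin` (for separable data)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi) < margin:   # inside the margin, or misclassified
                w += lr * yi * xi        # same update rule as the perceptron
                updated = True
        if not updated:
            break
    return w

# Linearly separable toy data; the last column is a constant bias feature.
X = np.array([[2.0, 1.0, 1.0], [1.5, 2.0, 1.0], [-1.0, -2.0, 1.0], [-2.0, -1.5, 1.0]])
y = np.array([1, 1, -1, -1])
w = margin_perceptron(X, y)
```

The only change from the classical perceptron is the update condition: `yi * (w @ xi) < margin` instead of `<= 0`.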
An Automatic Clustering Technique for Optimal Clusters (IJCSEA Journal)
This document presents a new automatic clustering algorithm called Automatic Merging for Optimal Clusters (AMOC). AMOC is a two-phase iterative extension of k-means clustering that aims to automatically determine the optimal number of clusters for a given dataset. In the first phase, AMOC initializes a large number of clusters k using k-means. In the second phase, it iteratively merges the lowest probability cluster with its closest neighbor, recomputing metrics each time to evaluate if the merge improved clustering quality. The algorithm stops merging once no improvements are found. Experimental results on synthetic and real datasets show AMOC finds nearly optimal cluster structures in terms of number, compactness and separation of clusters.
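The merge step at the heart of the second phase, fusing the lowest-population cluster into its nearest neighbour and recomputing the surviving centre, can be sketched as follows. This is an illustrative fragment only: AMOC's probability-based cluster selection and quality metrics are simplified here to cluster size and centre distance.

```python
import numpy as np

def merge_smallest(X, labels, centers):
    """One AMOC-style merge step (sketch): fuse the smallest cluster into its
    nearest neighbouring cluster and recompute that cluster's centre."""
    centers = [np.asarray(c, dtype=float) for c in centers]
    sizes = np.array([np.sum(labels == k) for k in range(len(centers))])
    small = int(np.argmin(sizes))
    others = [k for k in range(len(centers)) if k != small]
    near = min(others, key=lambda k: np.linalg.norm(centers[k] - centers[small]))
    labels = labels.copy()
    labels[labels == small] = near
    centers = [c for k, c in enumerate(centers) if k != small]
    labels[labels > small] -= 1                   # keep cluster ids contiguous
    new_near = near if near < small else near - 1
    centers[new_near] = X[labels == new_near].mean(axis=0)
    return labels, np.array(centers)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 1, 2, 2])                # 3 initial clusters, one tiny
centers = [X[labels == k].mean(axis=0) for k in range(3)]
labels2, centers2 = merge_smallest(X, labels, centers)
```

In the full algorithm this step repeats, with a quality metric deciding whether to keep each merge, until no merge improves the clustering.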
A MODIFIED VORTEX SEARCH ALGORITHM FOR NUMERICAL FUNCTION OPTIMIZATION (ijaia)
This document presents a modified version of the Vortex Search (VS) algorithm called the Modified Vortex Search (MVS) algorithm for numerical function optimization. The VS algorithm has the drawback that it can get trapped in local minima for functions with multiple local minima. The MVS algorithm addresses this by generating candidate solutions around multiple points at each iteration rather than a single point, allowing it to escape local minima more easily. Computational results on benchmark functions showed the MVS algorithm outperformed the original VS algorithm, as well as PSO2011 and ABC algorithms.
CONSTRUCTING A FUZZY NETWORK INTRUSION CLASSIFIER BASED ON DIFFERENTIAL EVOLU... (IJCNCJournal)
This paper presents a method for constructing intrusion detection systems based on efficient fuzzy rule-based classifiers. The design of a fuzzy rule-based classifier from a given input-output data set can be cast as a feature selection and parameter optimization problem. For parameter optimization of the fuzzy classifiers, differential evolution is used, while the binary harmonic search algorithm is used for selecting relevant features. The performance of the designed classifiers is evaluated using the KDD Cup 1999 intrusion detection dataset, and the optimal classifier is selected based on the Akaike information criterion. The optimal intrusion detection system has a 1.21% type I error and a 0.39% type II error. A comparative study with other methods was carried out; the results obtained show the adequacy of the proposed method.
Cost Optimized Design Technique for Pseudo-Random Numbers in Cellular Automata (ijait)
In this research work, we emphasize a cost-effective design approach for high-quality pseudo-random numbers using one-dimensional Cellular Automata (CA) over Maximum Length CA. This work focuses on the different complexities involved in generating pseudo-random numbers in CA, e.g., space complexity, time complexity, design complexity and searching complexity. The procedure of optimizing these associated complexities is commonly referred to as the cost-effective generation approach for pseudo-random numbers. The mathematical treatment of the proposed methodology, compared with the existing maximum length CA, shows better flexibility in fault coverage. The randomness quality of the patterns generated by the proposed methodology has been verified using Diehard Tests, which show that it equals the randomness quality of the patterns generated by the maximum length cellular automata. The cost effectiveness results in a cheap hardware implementation of the concerned pseudo-random pattern generator. A short version of this paper has been published in [1].
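The basic mechanism of a one-dimensional CA pseudo-random generator can be sketched as below. Rule 30 is the classic textbook choice used here purely for illustration; the paper's maximum-length CA uses different (hybrid) rules, and the ring size and step count are arbitrary.

```python
def ca_prng_bits(seed, rule=30, steps=64):
    """Sketch of a 1-D cellular-automaton PRNG: evolve a ring of cells under an
    elementary CA rule and emit the centre cell at each step."""
    row, n, out = list(seed), len(seed), []
    for _ in range(steps):
        # each new cell looks up (left, self, right) as a 3-bit index into the rule
        row = [(rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
               for i in range(n)]
        out.append(row[n // 2])
    return out

bits = ca_prng_bits([0] * 15 + [1] + [0] * 15)  # single seed bit in a 31-cell ring
```

The space cost is one row of cells; each output bit costs one parallel update of the row, which is why CA generators map cheaply to hardware.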
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A Real Coded Genetic Algorithm For Solving Integer And Mixed Integer Optimiza... (Jim Jimenez)
This document describes a real coded genetic algorithm called MI-LXPM for solving integer and mixed integer constrained optimization problems. MI-LXPM modifies and extends an existing real coded genetic algorithm (LXPM) to handle integer restrictions on decision variables. It incorporates a truncation procedure to satisfy integer restrictions and a penalty approach for handling constraints. The performance of MI-LXPM is tested on 20 problems and compared to other algorithms, showing it outperforms them in most cases.
Cuckoo Search: Recent Advances and Applications (Xin-She Yang)
This document summarizes recent advances and applications of the cuckoo search algorithm, a nature-inspired metaheuristic optimization algorithm developed in 2009. Cuckoo search mimics the brood parasitism breeding behavior of some cuckoo species. It uses a combination of local and global search achieved through random walks and Levy flights to efficiently explore the search space. Studies show cuckoo search often finds optimal solutions faster than genetic algorithms and particle swarm optimization. The algorithm has been applied to diverse optimization problems and continues to be improved and extended to multi-objective optimization.
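The Lévy-flight step that drives cuckoo search's global exploration is commonly drawn with Mantegna's algorithm, sketched below; the stability exponent beta = 1.5 and the 0.01 step scale are conventional defaults, not values from the document.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def levy_step(dim, beta=1.5):
    """Draw one Levy-distributed step via Mantegna's algorithm, as commonly
    used in cuckoo search: the ratio of two Gaussians yields heavy tails."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# New candidate nest: current position plus a scaled Levy step.
x = np.zeros(3)
candidate = x + 0.01 * levy_step(3)
```

Most steps are small (local search) but occasional very large jumps occur, which is what gives the algorithm its mix of local and global exploration.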
COMPARING THE CUCKOO ALGORITHM WITH OTHER ALGORITHMS FOR ESTIMATING TWO GLSD ... (csandit)
This study introduces and compares different methods for estimating the two parameters of the generalized logarithmic series distribution. These methods are the cuckoo search optimization, maximum likelihood estimation, and method of moments algorithms. All the required derivations and basic steps of each algorithm are explained. The applications for these algorithms are implemented through simulations using different sample sizes (n = 15, 25, 50, 100). Results are compared using the statistical measure mean square error.
This document discusses using particle swarm optimization based on variable neighborhood search (PSO-VNS) to attack classical cryptography ciphers. PSO is a population-based optimization algorithm inspired by bird flocking behavior. VNS is a metaheuristic algorithm that explores neighborhoods of solutions to escape local optima. The paper proposes improving PSO with VNS to find better solutions. It evaluates PSO-VNS on substitution and transposition ciphers, finding it recovers keys better than standard PSO and other variants.
Modified Vortex Search Algorithm for Real Parameter Optimization (csandit)
The document presents a modified version of the Vortex Search (VS) algorithm called the Modified Vortex Search (MVS) algorithm. The MVS algorithm aims to overcome the drawback of the VS algorithm getting trapped in local minima for functions with multiple local minima. In the MVS algorithm, candidate solutions are generated around multiple centers at each iteration rather than a single center. This allows the algorithm to explore different regions simultaneously and avoid getting stuck in local minima. Computational results showed the MVS algorithm outperformed the original VS algorithm as well as PSO, ABC algorithms on benchmark test functions prone to getting trapped in local minima.
MODIFIED VORTEX SEARCH ALGORITHM FOR REAL PARAMETER OPTIMIZATION (cscpconf)
The Vortex Search (VS) algorithm is one of the recently proposed metaheuristic algorithms, inspired by the vortical flow of stirred fluids. Although the VS algorithm has been shown to be a good candidate for solving certain optimization problems, it also has some drawbacks. In the VS algorithm, candidate solutions are generated around the current best solution using a Gaussian distribution at each iteration pass. This keeps the algorithm simple but also causes problems: especially for functions with many local minimum points, generating candidate solutions around a single point can trap the algorithm in a local minimum. Because of the adaptive step-size adjustment scheme used in the VS algorithm, the locality of the created candidate solutions increases at each iteration pass; therefore, if the algorithm cannot escape a local point quickly, escaping that point in later iterations becomes much more difficult. In this study, a modified Vortex Search algorithm (MVS) is proposed to overcome the above-mentioned drawback of the existing VS algorithm. In the MVS algorithm, candidate solutions are generated around a number of points at each iteration pass. Computational results showed that this modification improves the global search ability of the existing VS algorithm, and the MVS algorithm outperformed the existing VS algorithm, PSO2011 and the ABC algorithm on the benchmark numerical function set.
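The mechanism described above, Gaussian sampling around several centres with an adaptively shrinking radius, can be sketched as follows. The shrink factor, sample counts, and starting centres are illustrative assumptions; the actual VS/MVS radius schedule uses the inverse incomplete gamma function, which is simplified to geometric decay here.

```python
import numpy as np

rng = np.random.default_rng(1)

def mvs_minimize(f, centers, radius=5.0, shrink=0.95, per_center=10, iters=200):
    """MVS-style sketch: sample Gaussian candidates around SEVERAL centres
    (the modification over single-centre VS), keep each centre's best
    improving candidate, and shrink the sampling radius each pass."""
    centers = [np.asarray(c, dtype=float) for c in centers]
    for _ in range(iters):
        for k, c in enumerate(centers):
            cands = c + rng.normal(0.0, radius, size=(per_center, c.size))
            best = min(cands, key=f)
            if f(best) < f(c):
                centers[k] = best
        radius *= shrink   # adaptive step-size contraction, as in VS
    return min(centers, key=f)

# Rastrigin: a standard multimodal benchmark with many local minima.
rastrigin = lambda x: 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
best = mvs_minimize(rastrigin, centers=[[4.0, 4.0], [-4.0, -4.0], [0.5, -0.5]])
```

With a single centre, the shrinking radius would lock the search into whichever basin that centre starts in; the multiple centres are what let the search keep exploring other regions.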
Modified Vortex Search Algorithm for Real Parameter Optimization csandit
Ìý
The document presents a modified version of the Vortex Search (VS) algorithm called the Modified Vortex Search (MVS) algorithm. The MVS algorithm aims to overcome the drawback of the VS algorithm getting trapped in local minima for functions with multiple local minima. In the MVS algorithm, candidate solutions are generated around multiple centers at each iteration rather than a single center. This allows the algorithm to explore different regions simultaneously and avoid getting stuck in local minima. Computational results showed the MVS algorithm outperformed the original VS algorithm as well as PSO, ABC algorithms on benchmark test functions prone to getting trapped in local minima.
MODIFIED VORTEX SEARCH ALGORITHM FOR REAL PARAMETER OPTIMIZATIONcscpconf
Ìý
The Vortex Search (VS) algorithm is one of the recently proposed metaheuristic algorithms which was inspired from the vortical flow of the stirred fluids. Although the VS algorithm is
shown to be a good candidate for the solution of certain optimization problems, it also has some drawbacks. In the VS algorithm, candidate solutions are generated around the current best solution by using a Gaussian distribution at each iteration pass. This provides simplicity to the
algorithm but it also leads to some problems along. Especially, for the functions those have a number of local minimum points, to select a single point to generate candidate solutions leads the algorithm to being trapped into a local minimum point. Due to the adaptive step-size
adjustment scheme used in the VS algorithm, the locality of the created candidate solutions is increased at each iteration pass. Therefore, if the algorithm cannot escape a local point as
quickly as possible, it becomes much more difficult for the algorithm to escape from that point
in the latter iterations. In this study, a modified Vortex Search algorithm (MVS) is proposed to
overcome above mentioned drawback of the existing VS algorithm. In the MVS algorithm, the candidate solutions are generated around a number of points at each iteration pass. Computational results showed that with the help of this modification the global search ability of
the existing VS algorithm is improved and the MVS algorithm outperformed the existing VS algorithm, PSO2011 and ABC algorithms for the benchmark numerical function set.
A glimpse into the world of Caddlance! Explore our portfolio featuring captivating 3D renderings, detailed BIM models, and inspiring architectural designs. Let's build the future, together. #Architecture #3D #BIM #Caddlance
See the world through a spatial lens with the Caddlance GIS Portfolio. We excel at creating compelling maps and visualizations that effectively communicate complex spatial information for better project understanding and stakeholder engagement.
Urban Design and Planning Portfolio .pdfsonam254547
Ìý
Get insights into the urban planning and design process at Caddlance. Our portfolio highlights our expertise in analysis, strategy development, and design implementation, leading to successful and impactful urban projects.
Production Planning & Control and Inventory Management.pptxVirajPasare
Ìý
Production Planning and Control : Importance, Objectives and Functions . Inventory Management - Meaning, Types , Objectives, Selective Inventory Control : ABC Analysis
Security requirements are often treated as generic lists of features, neglecting system-specific needs and the attacker's perspective. A systematic approach to security requirements engineering is crucial to avoid this problem.
Requirements engineering defects can cost 10 to 200 times more to correct once the system is operational. Software development takes place in a dynamic environment, causing requirements to constantly change.
Introduction to Forensic Research Digital ForensicsSaanviMisar
Ìý
Digital Forensics: Analyzing Cyber Crimes & Investigations
This comprehensive guide on Digital Forensics covers key concepts, tools, and methodologies used in investigating cyber crimes. It explores forensic techniques, evidence collection, data recovery, malware analysis, and incident response with real-world applications.
Topics Covered:
Introduction to Digital Forensics
Cybercrime Investigation Process
Digital Evidence & Chain of Custody
Popular Forensic Tools (Autopsy, EnCase, FTK)
Memory & Network Forensics
Challenges in Modern Cyber Investigations
Ideal for students, cybersecurity professionals, and forensic analysts, this resource provides valuable insights into digital investigations.
This factbook, using research from BloombergNEF and other sources, provides public and private sector leaders the critical information they need to accelerate the
transition to clean energy, along with all the health and economic benefits it will bring.
DAY 4VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV.pptxGellaBenson1
Ìý
An Improved Iterative Method for Solving General System of Equations via Genetic Algorithms
International Journal of Information Technology, Modeling and Computing (IJITMC) Vol. 4, No. 1, February 2016
DOI: 10.5121/ijitmc.2016.4104
AN IMPROVED ITERATIVE METHOD FOR SOLVING
GENERAL SYSTEM OF EQUATIONS VIA GENETIC
ALGORITHMS
Seyed Abolfazl Shahzadehfazeli 1,2 and Zainab Haji Abootorabi 2,3
1 Parallel Processing Laboratory, Yazd University, Yazd, Iran
2 Department of Computer Science, Faculty of Mathematics, Yazd University, Yazd, Iran
3 Department of Mathematics, PNU University, Yazd, Iran
ABSTRACT
Various algorithms are known for solving linear systems of equations. Iterative methods are recommended for large sparse linear systems, but in the case of general n × m matrices the classic iterative algorithms are not applicable except in a few cases. The algorithm presented here is based on minimization of the residual of the solution and has some genetic characteristics which invite the use of Genetic Algorithms. Therefore, this algorithm is well suited to the construction of parallel algorithms. In this paper, we describe a sequential version of the proposed algorithm and present its theoretical analysis. Moreover, we show some numerical results of the sequential algorithm, supply an improved algorithm, and compare the two algorithms.
Keywords
Large sparse linear systems, Iterative Genetic algorithms, Parallel algorithm.
1. INTRODUCTION
Let A be a general n × m matrix. The main problem is to solve the linear system of equations:
Ax = b (1)
where x ∈ R^m and b ∈ R^n are the solution and the given right-hand side vectors. The existence and uniqueness of the solution of (1) are determined by the matrix A and the vector b. Theoretically, the Gaussian or Gauss-Jordan elimination algorithm is an appropriate tool to solve the system (1) and to decide the question of solvability. However, when we use floating point arithmetic for large systems, these direct algorithms are inapplicable. For these cases the iterative algorithms are suitable. Effective iterative algorithms are known for symmetric positive definite linear systems.
In general, iterative algorithms can be written in the form:
x^(n) = B x^(n−1) + d, n = 1, 2, ... (2)
where B and d are a matrix and a vector chosen so that the stationary solution of (2) is equivalent to (1), see [1]. These iterative algorithms can be applied to general nonsymmetric linear systems as well, if we solve the following normal system:
A^T A x = A^T b = v (3)
instead of the original one. A disadvantage of this approach is that, although the resulting linear system (3) will be Hermitian for matrices with full rank, its condition number will be the square of the original condition number. Therefore, the convergence will be very slow. For general linear systems where A is non-Hermitian, instead of using some variant of the Conjugate Gradient (CG) algorithm, one of the most successful schemes is the generalized minimal residual algorithm (GMRES), see [9, 10], and the biconjugate gradient algorithm (BCG), see [2].
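This squaring of the condition number can be verified numerically. The following is a small illustrative check with NumPy (our own sketch, not code from the paper):

```python
import numpy as np

# Forming the normal equations squares the condition number, which is why
# iterating on A^T A x = A^T b converges slowly for ill-conditioned A.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))          # a general rectangular matrix

cond_A = np.linalg.cond(A)               # condition number of A
cond_AtA = np.linalg.cond(A.T @ A)       # condition number of the normal matrix

print(cond_AtA / cond_A**2)              # ratio is close to 1.0
```

Since cond(A) is the ratio of the largest to the smallest singular value, and A^T A has the squared singular values of A, the ratio above equals 1 up to floating point error.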
A more effective approach was suggested by Freund and Nachtigal [5] for the case of general nonsingular non-Hermitian systems, which is called the quasi-minimal residual algorithm (QMR). An iterative minimal residual algorithm slightly different from the above ones uses Genetic Algorithms (GA), see [4, 6, 7, 8].
In the following, we describe an improved method using genetic algorithms, in which the initial population is larger, a broader search field is used, and the crossover operator on the initial population enhances the convergence speed of the algorithm. In general, a genetic algorithm with a larger search space does not by itself guarantee faster convergence, see [3].
In this paper, it is shown that our improved method is in practice much faster than previous
types. This advantage can be very important for development of these algorithms for parallel
processing. The result obtained in [8] is briefly reviewed here to clarify the improved algorithm.
2. AN ITERATIVE MINIMAL RESIDUAL ALGORITHM
Most iterative algorithms for solving linear systems are based on some minimization algorithm. We can obtain the normal system (3) by least squares minimization: we have to solve the following problem:
min_{x ∈ R^m} ‖Ax − b‖₂² = min_{x ∈ R^m} (Ax − b, Ax − b) = min_{r ∈ R^n} (r, r) = min_{r ∈ R^n} ‖r‖₂² (4)
where r=Ax−b is the residual of the vector x.
It is easy to show that equation (4) can be written as in (3). More precisely, the necessary condition for the existence and uniqueness of the solution of (4) is the fulfillment of (3). The Hermitian property of the normal matrix A^T A is a sufficient condition for the uniqueness. For general non-Hermitian matrices this condition is not fulfilled in general. One possible algorithm to solve problem (4) can be obtained from the following theorem.
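For small dense systems, problem (4) and its relation to (3) can be checked directly with a library least-squares solver. The sketch below (our own illustration, not the paper's method) verifies that the residual of the minimizer satisfies the normal system:

```python
import numpy as np

# Solve the least-squares problem (4) with a dense reference solver.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])            # a general 3x2 matrix
b = np.array([1.0, 2.0, 2.0])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = A @ x - b                         # residual of the minimizer

# The residual of the least-squares solution is orthogonal to range(A),
# i.e. A^T r = 0, which is exactly the normal system (3).
print(np.allclose(A.T @ r, 0.0))
```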
Theorem 1. Let A ∈ R^{n×m} and b ∈ R^n be an arbitrary matrix and vector. Moreover, let x_α ∈ R^m and x_β ∈ R^m be arbitrary different vectors for which A(x_α − x_β) ≠ 0.
Let us introduce the following notations:
r_s = A x_s − b, s = α, β,
x_{α,β} = c x_α + (1 − c) x_β, r_{α,β} = c r_α + (1 − c) r_β,
where c ∈ R. We have A x_{α,β} − b = r_{α,β}. Then, the solution of the minimization problem (4) is the vector x_{α,β} with c, where
c = (r_β, r_β − r_α) / ‖r_α − r_β‖₂².
Mutation: After crossover is performed, mutation takes place. This is to prevent all solutions in the population from falling into a local optimum of the solved problem. Mutation randomly changes the new offspring. As with crossover, mutation depends on the encoding. For example, mutation could exchange two genes when we are encoding permutations; for binary encoding we can switch a few randomly chosen bits from 1 to 0 or from 0 to 1.
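The two mutation variants mentioned above can be sketched as follows (a toy illustration; the function names and the mutation rate are our own choices, not from the paper):

```python
import random

def swap_mutation(perm, rng):
    """Exchange two randomly chosen genes of a permutation encoding."""
    i, j = rng.sample(range(len(perm)), 2)
    child = list(perm)
    child[i], child[j] = child[j], child[i]
    return child

def bitflip_mutation(bits, rate, rng):
    """Flip each bit of a binary encoding independently with probability `rate`."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

rng = random.Random(42)
print(swap_mutation([0, 1, 2, 3, 4], rng))
print(bitflip_mutation([1, 0, 1, 1, 0, 0], 0.2, rng))
```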
The most important parts of the genetic algorithm are crossover and mutation, and the performance of the GA depends mainly on these two operators. The implementation and type of the operators depend on the given problem and encoding.
The evolution usually starts from a population of randomly generated individuals and proceeds in generations. In each generation, the fitness of every individual in the population is evaluated; multiple individuals are selected from the current population and modified to form a new population, which is used in the next iteration of the algorithm. The algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.
The Basic Genetic Algorithm
1) Generate random population of n chromosomes.
2) Evaluate the fitness function of each chromosome x in the population.
3) Create a new population by repeating the following steps until the new population is
complete.
a) Selection: Select two parent chromosomes from the population according to their fitness.
b) Crossover: With a crossover probability, cross over the parents to form new
offspring (children). If no crossover is performed, the offspring is an exact copy of the
parents.
c) Mutation: With a mutation probability, mutate the new offspring at each locus.
d) Place the new offspring in the new population.
4) Use the newly generated population for a further run of the algorithm.
5) If the end condition is satisfied, stop, and return the best solution in the current
population.
6) Go to step 2.
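The steps above can be sketched compactly in code. The toy example below maximizes the number of 1-bits in a binary chromosome; tournament selection stands in for the unspecified fitness-based selection, and all parameter values are illustrative choices, not taken from the paper:

```python
import random

def basic_ga(n=30, length=20, pc=0.8, pm=0.02, max_gens=200, seed=0):
    rng = random.Random(seed)
    fitness = lambda ch: sum(ch)                        # step 2: evaluate

    pop = [[rng.randint(0, 1) for _ in range(length)]   # step 1: random population
           for _ in range(n)]
    for _ in range(max_gens):
        best = max(pop, key=fitness)
        if fitness(best) == length:                     # step 5: end condition
            return best
        new_pop = []
        while len(new_pop) < n:                         # step 3
            p1, p2 = (max(rng.sample(pop, 3), key=fitness)  # a) tournament selection
                      for _ in range(2))
            if rng.random() < pc:                       # b) one-point crossover
                cut = rng.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            else:
                child = list(p1)
            child = [b ^ 1 if rng.random() < pm else b  # c) bit-flip mutation
                     for b in child]
            new_pop.append(child)                       # d)
        pop = new_pop                                   # step 4, then go to step 2
    return max(pop, key=fitness)

print(sum(basic_ga()))  # fitness of the best solution found
```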
The three most important aspects of using genetic algorithms are:
1) Definition of the objective function.
2) Definition and implementation of the genetic representation.
3) Definition and implementation of the genetic operators. Once these three have been
defined, the genetic algorithm should work fairly well.
In algorithm 1, we choose x_1 and x_2 arbitrarily, then use the crossover operator to reach an optimal x_{1,2} and replace x_1 with it. Then, we randomly select x_2 again. This process is continued until a fairly accurate approximation to the solution of the linear system Ax = b is achieved. In the improved algorithm, instead of x_1 and x_2, that is, instead of the original two-parent population, an m-parent population is chosen. (Note that in the naming of x_1, x_2, ..., x_m, m is the number of columns of matrix A.)
The crossover operator performed on the initial population generates the vectors x_{1,2}, ..., x_{m−1,m}. This process is repeatedly performed on the newly generated vectors until a single vector x_{1,2,3,...,m} as
an approximate initial solution is obtained. This vector now replaces x_1; we then randomly select x_2, ..., x_m again for the second population, and the algorithm is repeated until a sufficiently close solution is obtained. The following table shows how the new vectors are generated. For details refer to algorithm 2.
x_1
        x_{1,2}
x_2                 x_{1,2,3}
        x_{2,3}                 ...
x_3                 x_{2,3,4}
...                     ...                 x_{1,2,3,...,m}
        x_{m-1,m}
x_m
Now algorithm 1 is improved in order to increase the convergence speed.
The Algorithm 2
1) Let x_1 be an arbitrary vector, ε the error tolerance, and i = 1.
2) Calculate r_1 = A x_1 − b.
3) Generate arbitrary vectors x_2, ..., x_m such that r_i − r_j ≠ 0 (i ≠ j), i, j = 1, ..., m.
4) Calculate C_k = (r_{k+1}, r_{k+1} − r_k) / ‖r_{k+1} − r_k‖₂², for k = i, ..., m−1.
5) Calculate the new vectors x_{k,k+1} = C_k x_k + (1 − C_k) x_{k+1} and r_{k,k+1} = C_k r_k + (1 − C_k) r_{k+1}, for k = i, ..., m−1.
6) Set x_{k+1} = x_{k,k+1} for k = i, ..., n−1, and i = i + 1.
7) If i = n−1, then x_1 = x_{m−1,m} and r_1 = r_{m−1,m}; else go to 4.
8) If ‖r_1‖₂ < ε then go to 9, else go to 3.
9) The approximate solution is x_1.
10) End of algorithm.
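The inner reduction of Algorithm 2 can be sketched in code. The version below is one possible reading of steps 4 to 7 (the pairwise reduction order is our interpretation, and the names `combine` and `algorithm2` are ours, not the paper's); each pass combines adjacent vectors with the residual-minimizing coefficient C_k until a single vector x_{1,2,...,m} remains, which then replaces x_1:

```python
import numpy as np

def combine(xk, rk, xk1, rk1):
    """Steps 4-5: optimal combination of two candidates and their residuals."""
    d = rk1 - rk
    dd = np.dot(d, d)
    if dd == 0.0:                           # guard: identical residuals
        return xk1, rk1
    c = np.dot(rk1, d) / dd                 # C_k minimizes ||c r_k + (1-c) r_{k+1}||
    return c * xk + (1 - c) * xk1, c * rk + (1 - c) * rk1

def algorithm2(A, b, eps=1e-3, max_outer=10000, seed=0):
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x1 = rng.standard_normal(m)             # step 1
    for _ in range(max_outer):
        xs = [x1] + [rng.standard_normal(m) for _ in range(m - 1)]  # step 3
        rs = [A @ x - b for x in xs]        # step 2
        while len(xs) > 1:                  # steps 4-7: reduce to x_{1,2,...,m}
            pairs = [combine(xs[k], rs[k], xs[k + 1], rs[k + 1])
                     for k in range(len(xs) - 1)]
            xs = [p[0] for p in pairs]
            rs = [p[1] for p in pairs]
        x1 = xs[0]
        if np.linalg.norm(rs[0]) < eps:     # step 8
            return x1                       # step 9
    return x1

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = A @ rng.standard_normal(4)              # a consistent system
x = algorithm2(A, b)
print(np.linalg.norm(A @ x - b))            # final residual norm
```

Because the optimal combination never has a larger residual than either of its two inputs, the residual of x_1 is non-increasing over the outer passes.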
4. NUMERICAL EXPERIMENTS
In this section, we compare algorithm 1 and algorithm 2. We use different examples, review the speed of the algorithms, summarize several examples in a table, and present one example in more detail.
In the examples, the condition numbers of the matrices A are chosen rather small (the coefficient matrices are well-conditioned).
The following table (Table 1) compares the number of iterations of the two algorithms.
Figure 1 shows that for matrix A1, with condition number 80.903 and spectral radius 15.7009, algorithm 1 converges after 136720 iterations while the number of iterations in the improved algorithm (algorithm 2) is 16129. This is a notable reduction.
Table 1. The number of iterations.

Matrix   Dim.     Tol.    No. of iter. algorithm 1   No. of iter. algorithm 2
A1       15×20    10^-3   136720                     16129
A2       20×15    10^-3   10691                      1812
A3       20×25    10^-3   52273                      8285
A4       25×20    10^-3   665                        279
A5       25×30    10^-3   119041                     22920
A6       30×25    10^-3   805                        349
A7       35×30    10^-3   1228                       436
A8       40×35    10^-3   1390                       500
Figure 1. Speed of convergence of the Algorithm 1 and Algorithm 2 on the A1 matrix.
[Plot: residual norm on a log scale (10^-3 to 10^3) versus iteration count (0 to 14×10^4), with curves labelled Algo.1 and Algo.2.]