Slideshows by User: kknsastry (SlideShare feed)
Wed, 12 Sep 2007 19:20:39 GMT

Genetic Algorithms and Genetic Programming for Multiscale Modeling
/slideshow/genetic-algorithms-and-genetic-programming-for-multiscale-modeling/108742
Effective and efficient multiscale modeling is essential to advance both the science and the synthesis in a wide array of fields such as physics, chemistry, materials science, biology, biotechnology, and pharmacology. This study investigates the efficacy and potential of using genetic algorithms for multiscale materials modeling and addresses some of the challenges involved in designing competent algorithms that solve hard problems quickly, reliably, and accurately.


Towards billion bit optimization via parallel estimation of distribution algorithm
/slideshow/towards-billion-bit-optimization-via-parallel-estimation-of-distribution-algorithm/77764
This paper presents a highly efficient, fully parallelized implementation of the compact genetic algorithm (cGA) for very large scale problems with millions to billions of variables. It presents principled results demonstrating the scalable solution of a difficult test function on instances of over a billion variables using a parallel cGA implementation. The problem addressed is a noisy, blind problem over a vector of binary decision variables. Added noise equals up to a tenth of the variance of the deterministic objective function, making it difficult for simple hillclimbers to find the optimal solution. The compact GA, on the other hand, finds the optimum in the presence of noise quickly, reliably, and accurately, and its solution scalability follows known convergence theories. These results on a noisy problem, together with other results on problems involving varying modularity, hierarchy, and overlap, foreshadow the routine solution of billion-variable problems across the landscape of search problems.
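The core update of the compact GA described above fits in a few lines. Below is a toy sketch on ONEMAX; the problem size `l`, virtual population size `n`, and generation cap are illustrative choices, and the paper's setting (parallel, noisy, up to a billion variables) is far beyond this sketch.

```python
import random

# Toy compact GA (cGA) on ONEMAX. Parameters are illustrative only.
def compact_ga(l=32, n=50, seed=0, max_gens=20000):
    rng = random.Random(seed)
    p = [0.5] * l  # probability vector replaces an explicit population

    def onemax(x):
        return sum(x)  # fitness: number of ones

    for _ in range(max_gens):
        # Sample two candidate solutions from the current model.
        a = [int(rng.random() < pi) for pi in p]
        b = [int(rng.random() < pi) for pi in p]
        winner, loser = (a, b) if onemax(a) >= onemax(b) else (b, a)
        # Shift each differing position toward the winner by 1/n.
        for i in range(l):
            if winner[i] != loser[i]:
                p[i] += 1 / n if winner[i] else -1 / n
                p[i] = min(1.0, max(0.0, p[i]))
        if all(pi in (0.0, 1.0) for pi in p):
            break  # model fully converged
    return [int(pi >= 0.5) for pi in p]
```

The parallel version in the paper distributes this probability-vector update across processors; the memory footprint is one probability per variable rather than a full population, which is what makes billion-variable instances tractable.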

Sat, 14 Jul 2007 07:49:14 GMT

Empirical analysis of ideal recombination on random decomposable problems
/slideshow/empirical-analysis-of-ideal-recombination-on-random-decomposable-problems/77761
This paper analyzes the behavior of a selectorecombinative genetic algorithm (GA) with an ideal crossover on a class of random additively decomposable problems (rADPs). Specifically, additively decomposable problems of order k whose subsolution fitnesses are sampled from the standard uniform distribution U[0,1] are analyzed. The scalability of the selectorecombinative GA is investigated on 10,000 rADP instances. The validity of facetwise models in bounding the population size, run duration, and number of function evaluations required to solve the problems successfully is also verified. Finally, the easiest and most difficult rADP instances are investigated.
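An rADP instance of the kind described above is easy to generate. The sketch below assumes non-overlapping order-k subproblems whose 2^k subsolution fitnesses are drawn from U[0,1]; the generator and parameter names are illustrative, not the paper's code.

```python
import random

# Random additively decomposable problem (rADP) of order k: each of the m
# non-overlapping k-bit subproblems gets its 2^k subsolution fitnesses drawn
# from U[0,1]; overall fitness is the sum of subproblem contributions.
def make_radp(m=10, k=4, seed=1):
    rng = random.Random(seed)
    tables = [[rng.random() for _ in range(2 ** k)] for _ in range(m)]

    def fitness(x):  # x: list of m*k bits
        total = 0.0
        for i, table in enumerate(tables):
            bits = x[i * k:(i + 1) * k]
            idx = int("".join(map(str, bits)), 2)  # subsolution index
            total += table[idx]
        return total

    return fitness
```

Sampling many seeds yields the kind of instance ensemble the paper evaluates (10,000 instances), with instance difficulty varying with how close competing subsolution fitnesses happen to fall.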

Sat, 14 Jul 2007 07:30:53 GMT

Population sizing for entropy-based model building in genetic algorithms
/slideshow/population-sizing-for-entropybased-model-buliding-in-genetic-algorithms/77752
This paper presents a population-sizing model for entropy-based model building in genetic algorithms. Specifically, the population size required for building an accurate model is investigated, and the effect of selection pressure on population sizing is incorporated. The proposed model indicates that the population size required to build an accurate model scales as Θ(m log m), where m is the number of substructures, which is proportional to the problem size. Experiments verify the derivations, and the results agree with the proposed model.
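To make the abstract concrete, here is a hedged sketch of the entropy measurements that drive this style of model building, plus the Θ(m log m) sizing rule. The constant `c` and the function names are illustrative choices, not values taken from the paper.

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of `samples`."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def dependency(pop, i, j):
    """Entropy reduction (mutual information) from modeling bits i, j jointly."""
    hi = entropy([x[i] for x in pop])
    hj = entropy([x[j] for x in pop])
    hij = entropy([(x[i], x[j]) for x in pop])
    return hi + hj - hij  # > 0 indicates linkage between the two positions

def model_building_population(m, c=16):
    """Population size n = Theta(m log m) for m substructures (c is ad hoc)."""
    return math.ceil(c * m * math.log(m))
```

With too small a sample, the empirical `dependency` estimates are noisy and the model builder merges or misses groups spuriously; the population-sizing model bounds the sample size needed for these entropy estimates to be reliable.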

Sat, 14 Jul 2007 07:10:18 GMT

Let's get ready to rumble redux: Crossover versus mutation head to head on exponentially scaled problems
/slideshow/lets-get-ready-to-rumble-redux-crossover-versus-mutation-head-to-head-on-exponentially-scaled-problems-77662/77662
This paper analyzes the relative advantages of crossover and mutation on a class of deterministic and stochastic additively separable problems with substructures of non-uniform salience. The study assumes that the recombination and mutation operators have knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing are used to determine the scalability of each algorithm. The analysis shows that for deterministic exponentially-scaled additively separable problems, BB-wise mutation is more efficient than crossover, yielding a speedup of Θ(l log l), where l is the problem size. For noisy exponentially-scaled problems, the outcome depends on whether scaling or noise is dominant. When scaling dominates, mutation is more efficient than crossover, yielding a speedup of Θ(l log l). On the other hand, when noise dominates, crossover is more efficient than mutation, yielding a speedup of Θ(l).
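A hypothetical instance of the problem class analyzed above is an additively separable function whose l/k building blocks carry exponentially scaled (non-uniformly salient) weights. The convention below, that a block contributes its weight only when fully set to ones, is an illustrative choice, not the paper's exact test function.

```python
# Exponentially scaled, additively separable toy fitness: block i of k bits
# carries weight 2^i, so low-index blocks are the least salient and are
# typically resolved last by a selectorecombinative GA.
def exp_scaled_fitness(x, k=4):
    blocks = [x[i:i + k] for i in range(0, len(x), k)]
    return sum((2 ** i) * all(b) for i, b in enumerate(blocks))
```

The non-uniform salience is what drives the analysis: selection effectively solves one salience level at a time, so convergence time, and therefore the crossover-versus-mutation comparison, behaves differently than on uniformly scaled problems.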

Fri, 13 Jul 2007 20:00:26 GMT

Modeling selection pressure in XCS for proportionate and tournament selection
/slideshow/modeling-selection-pressure-in-xcs-for-proportionate-and-tournament-selection-77652/77652
In this paper, we derive models of the selection pressure in XCS for proportionate (roulette-wheel) selection and tournament selection. We show that these models can explain the empirical results previously presented in the literature. We validate the models on simple problems, showing that (i) when the model assumptions hold, the theory perfectly matches the empirical evidence; and (ii) when the model assumptions do not hold, the theory can still provide qualitative explanations of the experimental results.
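The two schemes being modeled can be sketched as plain population-level GA operators; note that XCS applies selection within action sets using classifier fitness, which this toy omits.

```python
import random

def proportionate_selection(pop, fitness, rng):
    """Roulette wheel: sampling probability proportional to (non-negative) fitness."""
    return rng.choices(pop, weights=[fitness(x) for x in pop], k=1)[0]

def tournament_selection(pop, fitness, rng, s=2):
    """Pick the fittest of s competitors drawn uniformly without replacement."""
    return max(rng.sample(pop, s), key=fitness)
```

A standard observation that the modeling exploits: tournament pressure depends only on the tournament size s and the fitness ranks, whereas proportionate pressure depends on the actual fitness values and their spread.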

Fri, 13 Jul 2007 19:12:51 GMT

Modeling XCS in class imbalances: Population sizing and parameter settings
/slideshow/modeling-xcs-in-class-imbalances-population-sizing-and-parameter-settings/77645
This paper analyzes the scalability of the population size required in XCS to maintain niches that are infrequently activated. Facetwise models are developed to predict the effect of the imbalance ratio (the ratio between the number of majority-class and minority-class instances sampled by XCS) on population initialization and on the creation and deletion of classifiers of the minority class. While theoretical models show that, ideally, XCS scales linearly with the imbalance ratio, XCS with the standard configuration scales exponentially. The causes potentially responsible for this deviation from the ideal scalability are also investigated. Specifically, the inheritance of classifier parameters, mutation, and subsumption are analyzed, and improvements to XCS's mechanisms are proposed to handle imbalanced problems effectively and efficiently. Once the recommendations are incorporated into XCS, empirical results show that the population size indeed scales linearly with the imbalance ratio.

Fri, 13 Jul 2007 18:38:14 GMT

Substructural surrogates for learning decomposable classification problems: implementation and first results
/slideshow/substructrual-surrogates-for-learning-decomposable-classification-problems-implementation-and-first-results/77643
This paper presents a learning methodology based on a substructural classification model to solve decomposable classification problems. The proposed method consists of three components: (1) a structural model that represents salient interactions between attributes for given data, (2) a surrogate model that provides a functional approximation of the output as a function of the attributes, and (3) a classification model that predicts the class for new inputs. The structural model is used to infer the functional form of the surrogate, and the surrogate's coefficients are estimated using linear regression. The classification model uses a maximally accurate, least complex surrogate to predict the output for given inputs. The structural model that yields an optimal classification model is found using an iterative greedy search heuristic. Results show that the proposed method successfully detects the interacting variables in hierarchical problems, groups them into linkage groups, and builds maximally accurate classification models. Initial results on non-trivial hierarchical test problems indicate that the method holds promise and have also shed light on several improvements to enhance its capabilities.
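The three components can be sketched for binary attributes as follows: the structural model is a list of attribute groups (linkage sets), the surrogate maps each group's configuration to a fitted coefficient, and the classifier thresholds the surrogate output. The coefficients and threshold are assumed given in this sketch; in the paper they come from linear regression and the greedy structural search.

```python
# Hypothetical sketch of a substructural surrogate classifier. `structure` is
# the structural model (tuples of attribute indices); `coeffs[g]` maps each
# possible configuration of group g to its fitted coefficient.
def surrogate_predict(x, structure, coeffs):
    """Sum the coefficient of each substructure's observed configuration."""
    return sum(coeffs[g][tuple(x[j] for j in group)]
               for g, group in enumerate(structure))

def classify(x, structure, coeffs, threshold=0.5):
    # The maximally accurate, least complex surrogate acts as the decision rule.
    return int(surrogate_predict(x, structure, coeffs) > threshold)
```

The greedy structural search the paper describes would iterate over candidate `structure` values, refit the coefficients for each, and keep the structure whose classifier is most accurate at the lowest complexity.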

This paper presents a learning methodology based on a substructural classification model to solve decomposable classification problems. The proposed method consists of three important components: (1) a structural model that represents salient interactions between attributes for a given data, (2) a surrogate model which provides a functional approximation of the output as a function of attributes, and (3) a classification model which predicts the class for new inputs. The structural model is used to infer the functional form of the surrogate and its coefficients are estimated using linear regression methods. The classification model uses a maximally-accurate, least-complex surrogate to predict the output for given inputs. The structural model that yields an optimal classification model is searched using an iterative greedy search heuristic. Results show that the proposed method successfully detects the interacting variables in hierarchical problems, group them in linkages groups, and build maximally accurate classification models. The initial results on non-trivial hierarchical test problems indicate that the proposed method holds promise and have also shed light on several improvements to enhance the capabilities of the proposed method.]]>
Fri, 13 Jul 2007 18:29:31 GMT /slideshow/substructrual-surrogates-for-learning-decomposable-classification-problems-implementation-and-first-results/77643 kknsastry@slideshare.net(kknsastry) Substructural surrogates for learning decomposable classification problems: implementation and first results kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/substructrual-surrogates-for-learning-decomposable-classification-problems-implementation-and-first-results1767-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This paper presents a learning methodology based on a substructural classification model to solve decomposable classification problems. The proposed method consists of three important components: (1) a structural model that represents salient interactions between attributes for given data, (2) a surrogate model that provides a functional approximation of the output as a function of the attributes, and (3) a classification model that predicts the class of new inputs. The structural model is used to infer the functional form of the surrogate, and its coefficients are estimated using linear regression methods. The classification model uses a maximally accurate, least complex surrogate to predict the output for given inputs. The structural model that yields an optimal classification model is found using an iterative greedy search heuristic. Results show that the proposed method successfully detects the interacting variables in hierarchical problems, groups them into linkage groups, and builds maximally accurate classification models. Initial results on non-trivial hierarchical test problems indicate that the proposed method holds promise and have also shed light on several improvements that would enhance its capabilities.
Substructural surrogates for learning decomposable classification problems: implementation and first results from kknsastry
]]>
575 6 https://cdn.slidesharecdn.com/ss_thumbnails/substructrual-surrogates-for-learning-decomposable-classification-problems-implementation-and-first-results1767-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Let's get ready to rumble redux: Crossover versus mutation head to head on exponentially scaled problems /slideshow/lets-get-ready-to-rumble-redux-crossover-versus-mutation-head-to-head-on-exponentially-scaled-problems/50558 lets-get-ready-to-rumble-redux-crossover-versus-mutation-head-to-head-on-exponentially-scaled-problems-16841
This paper analyzes the relative advantages of crossover and mutation on a class of deterministic and stochastic additively separable problems with substructures of non-uniform salience. The study assumes that the recombination and mutation operators have knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing have been used to determine the scalability of each algorithm. The analysis shows that for deterministic exponentially-scaled additively separable problems, BB-wise mutation is more efficient than crossover, yielding a speedup of Θ(l log l), where l is the problem size. For noisy exponentially-scaled problems, the outcome depends on whether scaling or noise is dominant. When scaling dominates, mutation is more efficient than crossover, yielding a speedup of Θ(l log l). On the other hand, when noise dominates, crossover is more efficient than mutation, yielding a speedup of Θ(l).]]>

Wed, 16 May 2007 16:44:14 GMT /slideshow/lets-get-ready-to-rumble-redux-crossover-versus-mutation-head-to-head-on-exponentially-scaled-problems/50558 kknsastry@slideshare.net(kknsastry) Let's get ready to rumble redux: Crossover versus mutation head to head on exponentially scaled problems kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/lets-get-ready-to-rumble-redux-crossover-versus-mutation-head-to-head-on-exponentially-scaled-problems-16841-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This paper analyzes the relative advantages of crossover and mutation on a class of deterministic and stochastic additively separable problems with substructures of non-uniform salience. The study assumes that the recombination and mutation operators have knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing have been used to determine the scalability of each algorithm. The analysis shows that for deterministic exponentially-scaled additively separable problems, BB-wise mutation is more efficient than crossover, yielding a speedup of Θ(<em>l</em> log <em>l</em>), where <em>l</em> is the problem size. For noisy exponentially-scaled problems, the outcome depends on whether scaling or noise is dominant. When scaling dominates, mutation is more efficient than crossover, yielding a speedup of Θ(<em>l</em> log <em>l</em>). On the other hand, when noise dominates, crossover is more efficient than mutation, yielding a speedup of Θ(<em>l</em>).
Let's get ready to rumble redux: Crossover versus mutation head to head on exponentially scaled problems from kknsastry
]]>
927 5 https://cdn.slidesharecdn.com/ss_thumbnails/lets-get-ready-to-rumble-redux-crossover-versus-mutation-head-to-head-on-exponentially-scaled-problems-16841-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Automated alphabet reduction with evolutionary algorithms for protein structure prediction /slideshow/automated-alphabet-reduction-with-evolutionary-algorithms-for-protein-structure-prediction/50552 automated-alphabet-reduction-with-evolutionary-algorithms-for-protein-structure-prediction-16717
This paper focuses on automated procedures to reduce the dimensionality of protein structure prediction datasets by simplifying the way in which the primary sequence of a protein is represented. The potential benefits of this procedure are a faster and easier learning process as well as the generation of more compact and human-readable classifiers. The dimensionality reduction procedure we propose consists of reducing the 20-letter amino acid (AA) alphabet, which is normally used to specify a protein sequence, to a lower-cardinality alphabet. This reduction comes about by clustering AA types according to their physical and chemical similarity. Our automated reduction procedure is guided by a fitness function based on the mutual information between the AA-based input attributes of the dataset and the protein structure feature being predicted. To search for the optimal reduction, the Extended Compact Genetic Algorithm (ECGA) was used, and the results of this process were then fed into (and validated by) BioHEL, a genetics-based machine learning technique. BioHEL used the reduced alphabet to induce rules for protein structure prediction features, and its results are compared to those of two standard machine learning systems. Our results show that it is possible to reduce the alphabet used for prediction from twenty letters to just three, resulting in more compact, i.e. interpretable, rules. Also, a protein-wise accuracy performance measure suggests that the loss of accuracy incurred by this substantial alphabet reduction is not statistically significant when compared to the full alphabet.]]>

Wed, 16 May 2007 16:29:32 GMT /slideshow/automated-alphabet-reduction-with-evolutionary-algorithms-for-protein-structure-prediction/50552 kknsastry@slideshare.net(kknsastry) Automated alphabet reduction with evolutionary algorithms for protein structure prediction kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/automated-alphabet-reduction-with-evolutionary-algorithms-for-protein-structure-prediction-16717-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This paper focuses on automated procedures to reduce the dimensionality of protein structure prediction datasets by simplifying the way in which the primary sequence of a protein is represented. The potential benefits of this procedure are a faster and easier learning process as well as the generation of more compact and human-readable classifiers. The dimensionality reduction procedure we propose consists of reducing the 20-letter amino acid (AA) alphabet, which is normally used to specify a protein sequence, to a lower-cardinality alphabet. This reduction comes about by clustering AA types according to their physical and chemical similarity. Our automated reduction procedure is guided by a fitness function based on the mutual information between the AA-based input attributes of the dataset and the protein structure feature being predicted. To search for the optimal reduction, the Extended Compact Genetic Algorithm (ECGA) was used, and the results of this process were then fed into (and validated by) BioHEL, a genetics-based machine learning technique. BioHEL used the reduced alphabet to induce rules for protein structure prediction features, and its results are compared to those of two standard machine learning systems. Our results show that it is possible to reduce the alphabet used for prediction from twenty letters to just three, resulting in more compact, i.e. interpretable, rules. Also, a protein-wise accuracy performance measure suggests that the loss of accuracy incurred by this substantial alphabet reduction is not statistically significant when compared to the full alphabet.
Automated alphabet reduction with evolutionary algorithms for protein structure prediction from kknsastry
]]>
813 7 https://cdn.slidesharecdn.com/ss_thumbnails/automated-alphabet-reduction-with-evolutionary-algorithms-for-protein-structure-prediction-16717-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Analyzing probabilistic models in hierarchical BOA on traps and spin glasses /slideshow/analyzing-probabilistic-models-in-hierarchical-boa-on-traps-and-spin-glasses/50375 analyzing-probabilistic-models-in-hierarchical-boa-on-traps-and-spin-glasses-4915
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.]]>

Wed, 16 May 2007 10:29:52 GMT /slideshow/analyzing-probabilistic-models-in-hierarchical-boa-on-traps-and-spin-glasses/50375 kknsastry@slideshare.net(kknsastry) Analyzing probabilistic models in hierarchical BOA on traps and spin glasses kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/analyzing-probabilistic-models-in-hierarchical-boa-on-traps-and-spin-glasses-4915-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
Analyzing probabilistic models in hierarchical BOA on traps and spin glasses from kknsastry
]]>
754 5 https://cdn.slidesharecdn.com/ss_thumbnails/analyzing-probabilistic-models-in-hierarchical-boa-on-traps-and-spin-glasses-4915-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Modeling selection pressure in XCS for proportionate and tournament selection /slideshow/modeling-selection-pressure-in-xcs-for-proportionate-and-tournament-selection/50374 modeling-selection-pressure-in-xcs-for-proportionate-and-tournament-selection-9172
In this paper, we derive models of the selection pressure in XCS for proportionate (roulette wheel) selection and tournament selection. We show that these models can explain the empirical results previously presented in the literature. We validate the models on simple problems, showing that (i) when the model assumptions hold, the theory perfectly matches the empirical evidence, and (ii) when the model assumptions do not hold, the theory can still provide qualitative explanations of the experimental results.]]>

Wed, 16 May 2007 10:29:49 GMT /slideshow/modeling-selection-pressure-in-xcs-for-proportionate-and-tournament-selection/50374 kknsastry@slideshare.net(kknsastry) Modeling selection pressure in XCS for proportionate and tournament selection kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/modeling-selection-pressure-in-xcs-for-proportionate-and-tournament-selection-9172-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> In this paper, we derive models of the selection pressure in XCS for proportionate (roulette wheel) selection and tournament selection. We show that these models can explain the empirical results previously presented in the literature. We validate the models on simple problems, showing that (i) when the model assumptions hold, the theory perfectly matches the empirical evidence, and (ii) when the model assumptions do not hold, the theory can still provide qualitative explanations of the experimental results.
Modeling selection pressure in XCS for proportionate and tournament selection from kknsastry
]]>
720 3 https://cdn.slidesharecdn.com/ss_thumbnails/modeling-selection-pressure-in-xcs-for-proportionate-and-tournament-selection-9172-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Modeling XCS in class imbalances: Population size and parameter settings /slideshow/modeling-xcs-in-class-imbalances-population-size-and-parameter-settings/50373 modeling-xcs-in-class-imbalances-population-size-and-parameter-settings-19927
This paper analyzes the scalability of the population size required in XCS to maintain niches that are infrequently activated. Facetwise models have been developed to predict the effect of the imbalance ratio (the ratio between the number of instances of the majority class and of the minority class that are sampled by XCS) on population initialization and on the creation and deletion of classifiers of the minority class. While theoretical models show that, ideally, XCS scales linearly with the imbalance ratio, XCS with the standard configuration scales exponentially. The causes potentially responsible for this deviation from the ideal scalability are also investigated. Specifically, the inheritance procedure of classifiers’ parameters, mutation, and subsumption are analyzed, and improvements to XCS’s mechanisms are proposed to handle imbalanced problems effectively and efficiently. Once these recommendations are incorporated into XCS, empirical results show that its population size indeed scales linearly with the imbalance ratio.]]>

Wed, 16 May 2007 10:29:33 GMT /slideshow/modeling-xcs-in-class-imbalances-population-size-and-parameter-settings/50373 kknsastry@slideshare.net(kknsastry) Modeling XCS in class imbalances: Population size and parameter settings kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/modeling-xcs-in-class-imbalances-population-size-and-parameter-settings-19927-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This paper analyzes the scalability of the population size required in XCS to maintain niches that are infrequently activated. Facetwise models have been developed to predict the effect of the imbalance ratio (the ratio between the number of instances of the majority class and of the minority class that are sampled by XCS) on population initialization and on the creation and deletion of classifiers of the minority class. While theoretical models show that, ideally, XCS scales linearly with the imbalance ratio, XCS with the standard configuration scales exponentially. The causes potentially responsible for this deviation from the ideal scalability are also investigated. Specifically, the inheritance procedure of classifiers’ parameters, mutation, and subsumption are analyzed, and improvements to XCS’s mechanisms are proposed to handle imbalanced problems effectively and efficiently. Once these recommendations are incorporated into XCS, empirical results show that its population size indeed scales linearly with the imbalance ratio.
Modeling XCS in class imbalances: Population size and parameter settings from kknsastry
]]>
712 8 https://cdn.slidesharecdn.com/ss_thumbnails/modeling-xcs-in-class-imbalances-population-size-and-parameter-settings-19927-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Fast and accurate reaction dynamics via multiobjective genetic algorithm optimization of semiempirical potentials /slideshow/fast-and-accurate-reaction-dynamics-via-multiobjective-genetic-algorithm-optimization-of-semiempirical-potentials/37347 fast-and-accurate-reaction-dynamics-via-multiobjective-genetic-algorithm-optimization-of-semiempirical-potentials-16964
]]>
Tue, 10 Apr 2007 19:36:35 GMT /slideshow/fast-and-accurate-reaction-dynamics-via-multiobjective-genetic-algorithm-optimization-of-semiempirical-potentials/37347 kknsastry@slideshare.net(kknsastry) Fast and accurate reaction dynamics via multiobjective genetic algorithm optimization of semiempirical potentials kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/fast-and-accurate-reaction-dynamics-via-multiobjective-genetic-algorithm-optimization-of-semiempirical-potentials-16964-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br>
Fast and accurate reaction dynamics via multiobjective genetic algorithm optimization of semiempirical potentials from kknsastry
]]>
756 7 https://cdn.slidesharecdn.com/ss_thumbnails/fast-and-accurate-reaction-dynamics-via-multiobjective-genetic-algorithm-optimization-of-semiempirical-potentials-16964-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
On Extended Compact Genetic Algorithm /slideshow/on-extended-compact-genetic-algorithm/26322 on-extended-compact-genetic-algorithm-28404
In this study we present a detailed analysis of the extended compact genetic algorithm (ECGA). Based on this analysis, empirical relations for population sizing and convergence time are derived and compared with existing relations. We then apply ECGA to a non-azeotropic binary working-fluid power cycle optimization problem. The optimal power cycle obtained improves the cycle efficiency by 2.5% over that of existing cycles, illustrating the capability of ECGA to solve real-world problems.]]>

]]>
Sun, 25 Feb 2007 11:03:51 GMT /slideshow/on-extended-compact-genetic-algorithm/26322 kknsastry@slideshare.net(kknsastry) On Extended Compact Genetic Algorithm kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/on-extended-compact-genetic-algorithm-28404-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br>
On Extended Compact Genetic Algorithm from kknsastry
]]>
615 4 https://cdn.slidesharecdn.com/ss_thumbnails/on-extended-compact-genetic-algorithm-28404-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Silicon Cluster Optimization Using Extended Compact Genetic Algorithm /slideshow/silicon-cluster-optimization-using-extended-compact-genetic-algorithm/26321 silicon-cluster-optimization-using-extended-compact-genetic-algorithm-16908
This paper presents an efficient cluster optimization algorithm. The proposed algorithm uses the extended compact genetic algorithm (ECGA), one of the competent genetic algorithms (GAs), coupled with Nelder-Mead simplex local search. The lowest energy structures of silicon clusters with 4-11 atoms have been successfully predicted. The minimum population size and the total number of function (potential energy of the cluster) evaluations required to converge to the global optimum with a reliability of 96% have been empirically determined and are O(n^4.2) and O(n^8.2), respectively. The results indicate that the proposed algorithm is highly reliable in predicting globally optimal structures. However, efficiency-enhancement techniques have to be employed to reduce the high cost of function evaluations when predicting structures of larger clusters.]]>

]]>
Sun, 25 Feb 2007 10:57:10 GMT /slideshow/silicon-cluster-optimization-using-extended-compact-genetic-algorithm/26321 kknsastry@slideshare.net(kknsastry) Silicon Cluster Optimization Using Extended Compact Genetic Algorithm kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/silicon-cluster-optimization-using-extended-compact-genetic-algorithm-16908-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br>
Silicon Cluster Optimization Using Extended Compact Genetic Algorithm from kknsastry
]]>
1001 7 https://cdn.slidesharecdn.com/ss_thumbnails/silicon-cluster-optimization-using-extended-compact-genetic-algorithm-16908-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
A Practical Schema Theorem for Genetic Algorithm Design and Tuning /kknsastry/a-practical-schema-theorem-for-genetic-algorithm-design-and-tuning a-practical-schema-theorem-for-genetic-algorithm-design-and-tuning-9714
This paper develops theory that enables the design of genetic algorithms, and the choice of their parameters, such that the proportion of the best building blocks grows. A practical schema theorem is used for this purpose, and its ramifications for the choice of selection operator and the parameterization of the algorithm are explored. In particular, stochastic universal selection, tournament selection, and truncation selection schemes are employed to verify the results. The results agree with the schema theorem and indicate that it must be obeyed to ensure sustained growth of good building blocks. The analysis also suggests that the schema theorem alone is insufficient to guarantee the success of a selectorecombinative genetic algorithm.]]>

]]>
Sun, 25 Feb 2007 10:48:22 GMT /kknsastry/a-practical-schema-theorem-for-genetic-algorithm-design-and-tuning kknsastry@slideshare.net(kknsastry) A Practical Schema Theorem for Genetic Algorithm Design and Tuning kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/a-practical-schema-theorem-for-genetic-algorithm-design-and-tuning-9714-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br>
A Practical Schema Theorem for Genetic Algorithm Design and Tuning from kknsastry
]]>
1661 7 https://cdn.slidesharecdn.com/ss_thumbnails/a-practical-schema-theorem-for-genetic-algorithm-design-and-tuning-9714-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
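The growth condition discussed in the abstract above can be written compactly. The following is one generic form from the building-block growth literature, stated as an assumption rather than transcribed from these slides: for a schema H with m(H, t) copies at generation t, selection pressure s, crossover probability p_c, and crossover disruption probability ε(H),

```latex
\mathbb{E}\left[m(H,\,t+1)\right] \;\ge\; s\,\bigl(1 - p_c\,\epsilon(H)\bigr)\, m(H,\,t),
\qquad\text{so sustained growth requires}\qquad
s\,\bigl(1 - p_c\,\epsilon(H)\bigr) \;\ge\; 1 .
```

Read this way, the theorem constrains the joint choice of selection scheme (which sets s) and crossover parameterization (which sets p_c and ε), matching the paper's claim that the condition is necessary but not by itself sufficient for success.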
On the Supply of Building Blocks /slideshow/on-the-supply-of-building-blocks/26318 on-the-supply-of-building-blocks-17046
This study addresses the issue of building-block supply in the initial population. Facetwise models for the supply of a single building block, as well as for the supply of all schemata in a partition, have been developed. Using these facetwise models, an estimate has been derived for the population size required to ensure the presence of all raw building blocks. The facetwise models and the population-sizing estimate are verified with computational results.]]>

]]>
Sun, 25 Feb 2007 10:42:22 GMT /slideshow/on-the-supply-of-building-blocks/26318 kknsastry@slideshare.net(kknsastry) On the Supply of Building Blocks kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/on-the-supply-of-building-blocks-17046-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br>
On the Supply of Building Blocks from kknsastry
]]>
505 5 https://cdn.slidesharecdn.com/ss_thumbnails/on-the-supply-of-building-blocks-17046-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
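The supply question above is easy to probe numerically: for one fixed partition of k binary genes in a uniformly random population of size n, estimate by Monte Carlo the probability that every one of the 2^k raw building blocks appears. This is an illustrative check of the phenomenon, not the paper's facetwise model, and the function names are mine.

```python
import random

def all_blocks_present(n, k, rng):
    """Sample a random binary population of size n and check whether every
    one of the 2**k order-k schemata in one fixed partition is present."""
    seen = set()
    for _ in range(n):
        seen.add(tuple(rng.randint(0, 1) for _ in range(k)))
    return len(seen) == 2 ** k

def supply_probability(n, k, trials=500, seed=0):
    """Monte Carlo estimate of the probability that a random initial
    population supplies all raw building blocks of order k."""
    rng = random.Random(seed)
    return sum(all_blocks_present(n, k, rng) for _ in range(trials)) / trials
```

For k = 3, a population of size 8 almost never contains all eight configurations, while a few dozen individuals nearly always suffice, which is the qualitative behavior a supply-driven population-sizing estimate has to capture.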
Don't Evaluate, Inherit /slideshow/dont-evaluate-inherit/26316 dont-evaluate-inherit-14555
This paper studies fitness inheritance as an efficiency-enhancement technique for genetic and evolutionary algorithms. Convergence and population-sizing models are derived and compared with experimental results. These models are optimized for greatest speed-up, and the optimal inheritance proportion needed to obtain that speed-up is derived. Results on OneMax problems show that when the inheritance effects are considered in the population-sizing model, the number of function evaluations is reduced by 20% with the use of fitness inheritance. Results also indicate that, for a fixed population size, the number of function evaluations can be reduced by 70% using a simple fitness inheritance technique.]]>

]]>
Sun, 25 Feb 2007 10:36:02 GMT /slideshow/dont-evaluate-inherit/26316 kknsastry@slideshare.net(kknsastry) Don't Evaluate, Inherit kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/dont-evaluate-inherit-14555-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br>
Don't Evaluate, Inherit from kknsastry
]]>
563 5 https://cdn.slidesharecdn.com/ss_thumbnails/dont-evaluate-inherit-14555-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
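A toy version of the idea is simple to sketch: in a selectorecombinative GA on OneMax, let each offspring inherit the mean of its parents' fitness values with probability `p_inherit` instead of being evaluated, and count how many true evaluations are saved. This is a hedged illustration of the general technique, not the paper's model; the parameter values and names are mine.

```python
import random

def onemax(x):
    return sum(x)

def run_ga(l=20, pop_size=100, gens=30, p_inherit=0.5, seed=3):
    """GA with binary tournament selection and uniform crossover on OneMax.
    With probability p_inherit an offspring inherits the mean parental
    fitness as a surrogate instead of being evaluated."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(l)] for _ in range(pop_size)]
    fit = [onemax(x) for x in pop]
    evals = pop_size                      # the initial population is evaluated
    for _ in range(gens):
        new_pop, new_fit = [], []
        while len(new_pop) < pop_size:
            # binary tournament selection for each parent
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            p1 = i if fit[i] >= fit[j] else j
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            p2 = i if fit[i] >= fit[j] else j
            # uniform crossover
            child = [pop[p1][k] if rng.random() < 0.5 else pop[p2][k]
                     for k in range(l)]
            if rng.random() < p_inherit:
                f = (fit[p1] + fit[p2]) / 2   # inherited (surrogate) fitness
            else:
                f = onemax(child)             # true evaluation
                evals += 1
            new_pop.append(child)
            new_fit.append(f)
        pop, fit = new_pop, new_fit
    # final best is re-scored with the true objective (not counted)
    return max(onemax(x) for x in pop), evals
```

With `p_inherit=0.5` roughly half of all offspring evaluations are replaced by the surrogate, so the evaluation count drops by about half while selection, driven partly by inherited values, still pushes the population toward the optimum.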
Efficient Cluster Optimization Using A Hybrid Extended Compact Genetic Algorithm with A Seeded Population /slideshow/efficient-cluster-optimization-using-a-hybrid-extended-compact-genetic-algorithm-with-a-seeded-population/26315 efficient-cluster-optimization-using-a-hybrid-extended-compact-genetic-algorithm-with-a-seeded-population-14673
A recent study by Sastry and Xiao (2001) proposed a highly reliable cluster optimization algorithm that employed the extended compact genetic algorithm (ECGA) along with Nelder-Mead simplex search. This study applies an efficiency-enhancement technique to the ECGA-based cluster optimizer to reduce the population size and the number of function evaluations required, while retaining high reliability in predicting the lowest energy structure. Seeding the initial population with the lowest energy structures of smaller clusters is employed as the efficiency-enhancement technique. Empirical results indicate that the scale-up of the population size and the total number of function evaluations with cluster size is reduced from O(n^4.2) and O(n^8.2) to O(n^0.83) and O(n^2.45), respectively.]]>

]]>
Sun, 25 Feb 2007 10:27:31 GMT /slideshow/efficient-cluster-optimization-using-a-hybrid-extended-compact-genetic-algorithm-with-a-seeded-population/26315 kknsastry@slideshare.net(kknsastry) Efficient Cluster Optimization Using A Hybrid Extended Compact Genetic Algorithm with A Seeded Population kknsastry <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/efficient-cluster-optimization-using-a-hybrid-extended-compact-genetic-algorithm-with-a-seeded-population-14673-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br>
Efficient Cluster Optimization Using A Hybrid Extended Compact Genetic Algorithm with A Seeded Population from kknsastry
]]>
636 6 https://cdn.slidesharecdn.com/ss_thumbnails/efficient-cluster-optimization-using-a-hybrid-extended-compact-genetic-algorithm-with-a-seeded-population-14673-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
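The seeding idea generalizes readily: initialize part of the population from perturbed copies of known good solutions to a related (for instance, smaller) problem instance, and fill the rest randomly. A minimal sketch on bit strings, with illustrative names of my own choosing rather than anything from the slides:

```python
import random

def onemax(x):
    return sum(x)

def seeded_population(size, l, seeds, flip_p, rng):
    """Build an initial population in which half the individuals are mutated
    copies of known good solutions (`seeds`), each bit flipped with
    probability flip_p, and the remainder are uniformly random strings."""
    pop = []
    for i in range(size):
        if seeds and i < size // 2:
            base = rng.choice(seeds)
            pop.append([b ^ (rng.random() < flip_p) for b in base])
        else:
            pop.append([rng.randint(0, 1) for _ in range(l)])
    return pop
```

On OneMax-style problems the seeded half starts near the known optimum, so the best initial individual is far better than in a purely random population, which is the mechanism behind the reduced scale-up the abstract reports.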