Slideshows by User: ijaia (SlideShare feed)

MOVIE RECOMMENDATION SYSTEM BASED ON MACHINE LEARNING USING PROFILING
/slideshow/movie-recommendation-system-based-on-machine-learning-using-profiling/275551469 (Tue, 11 Feb 2025 14:35:40 GMT)
With the increasing amount of data available, recommendation systems are important for helping users find relevant content. This paper introduces a movie recommendation system that uses user profiles and machine learning techniques to improve the user experience by offering personalized suggestions. We tested different machine learning methods, including k-nearest neighbors (KNN), support vector machines (SVM), and neural networks, and used several datasets, such as MovieLens and Netflix Prize, to evaluate how accurate the recommendations were and how satisfied users were with them.
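The abstract gives no implementation details; below is a minimal sketch of a profile-based KNN recommender of the kind described, assuming a MovieLens-style ratings table with userId, movieId, and rating columns (the schema, neighbour count, and scoring rule are illustrative assumptions, not details from the paper).

```python
# Sketch of a user-profile KNN recommender over a MovieLens-style ratings table.
import pandas as pd
from sklearn.neighbors import NearestNeighbors

def recommend(ratings: pd.DataFrame, target_user: int, k: int = 10, top_n: int = 5):
    # Build user profiles as a user-item rating matrix; unrated items become 0.
    profiles = ratings.pivot_table(index="userId", columns="movieId",
                                   values="rating", fill_value=0)
    knn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(profiles.values)
    # The nearest hit is the target user itself, so skip it.
    _, idx = knn.kneighbors(profiles.loc[[target_user]].values)
    neighbours = profiles.index[idx[0][1:]]
    # Score movies by the neighbours' mean rating and drop titles already seen.
    scores = profiles.loc[neighbours].mean(axis=0)
    seen = ratings.loc[ratings.userId == target_user, "movieId"]
    return scores.drop(seen, errors="ignore").nlargest(top_n)
```

An SVM or neural-network variant would replace the neighbour-averaging step with a model trained on the same user profiles.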

AE-ViT: Token Enhancement for Vision Transformers via CNN-based Autoencoder Ensembles
/slideshow/ae-vit-token-enhancement-for-vision-transformers-via-cnn-based-autoencoder-ensembles/275551431 (Tue, 11 Feb 2025 14:33:56 GMT)
While Vision Transformers (ViTs) have revolutionized computer vision with their exceptional results, they struggle to balance processing speed with visual detail preservation. This tension becomes particularly evident when implementing larger patch sizes: although larger patches reduce computational costs, they lead to significant information loss during tokenization. We present AE-ViT, a novel architecture that leverages an ensemble of autoencoders to address this issue by introducing specialized latent tokens that integrate seamlessly with standard patch tokens, enabling ViTs to capture both global and fine-grained features. Our experiments on CIFAR-100 show that AE-ViT achieves a 23.67% relative accuracy improvement over the baseline ViT when using 16×16 patches, effectively recovering fine-grained details typically lost with larger patches. Notably, AE-ViT maintains reasonable performance (60.64%) even at 32×32 patches. We further validate our method on CIFAR-10, confirming consistent benefits and adaptability across different datasets. Ablation studies on ensemble size and integration strategy underscore the robustness of AE-ViT, while computational analysis shows that its efficiency scales favorably with increasing patch size. Overall, these findings suggest that AE-ViT provides a practical solution to the patch-size dilemma in ViTs by striking a balance between accuracy and computational cost, all within a simple, end-to-end trainable design.
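The abstract describes the mechanism only at a high level, so the PyTorch sketch below merely illustrates the idea of appending autoencoder-derived latent tokens to the standard patch tokens; the dimensions, number of autoencoders, and classifier head are assumptions, and the reconstruction decoders, losses, and training loop are omitted.

```python
# Illustrative sketch (not the authors' code): small CNN encoders produce extra
# "latent tokens" that are concatenated with the ViT patch tokens.
import torch
import torch.nn as nn

class LatentTokenizer(nn.Module):
    """Encoder half of one autoencoder, yielding a single latent token per image."""
    def __init__(self, dim: int = 192):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):                        # (B, 3, H, W) -> (B, 1, dim)
        return self.encoder(x).unsqueeze(1)

class AEViTSketch(nn.Module):
    def __init__(self, patch: int = 16, dim: int = 192, n_ae: int = 4, n_classes: int = 100):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.tokenizers = nn.ModuleList([LatentTokenizer(dim) for _ in range(n_ae)])
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, n_classes)    # e.g. CIFAR-100

    def forward(self, x):
        patches = self.patch_embed(x).flatten(2).transpose(1, 2)      # (B, N, dim)
        latents = torch.cat([t(x) for t in self.tokenizers], dim=1)   # (B, n_ae, dim)
        tokens = torch.cat([patches, latents], dim=1)                 # coarse + fine-grained
        return self.head(self.transformer(tokens).mean(dim=1))
```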

EXPLORING THE INTEGRATION OF ARTIFICIAL INTELLIGENCE INTO THE FUNCTIONS OF AN ACCOUNTING DEPARTMENT
/slideshow/exploring-the-integration-of-artificial-intelligence-into-the-functions-of-an-accounting-department/275551396 (Tue, 11 Feb 2025 14:32:07 GMT)
Artificial intelligence is a significant technological innovation that is transforming various fields, including accounting. It combines hardware and software to simulate human cognitive processes, enabling machines to perform complex tasks such as learning, reasoning, and decision-making. This paper explores the advantages and disadvantages of integrating artificial intelligence into accounting practices. While artificial intelligence presents numerous benefits for accountants, it also introduces challenges that must be addressed. The paper contributes to the expanding knowledge base on artificial intelligence in accounting by offering practical recommendations for accountants on adopting it effectively. Despite these challenges, integration offers considerable efficiency gains, positioning artificial intelligence as a strategic investment for organizations aiming to improve the performance and effectiveness of their accounting departments.

DIVERGENT ENSEMBLE NETWORKS: IMPROVING PREDICTIVE RELIABILITY AND COMPUTATIONAL EFFICIENCY
/slideshow/divergent-ensemble-networks-improving-predictive-reliability-and-computational-efficiency/275551370 (Tue, 11 Feb 2025 14:30:42 GMT)
The effectiveness of ensemble learning in improving prediction accuracy and estimating uncertainty is well-established. However, conventional ensemble methods often grapple with high computational demands and redundant parameters due to independent network training. This study introduces the Divergent Ensemble Network (DEN), a novel framework designed to optimize computational efficiency while maintaining prediction diversity. DEN achieves superior predictive reliability with reduced parameter overhead by leveraging shared representation learning and independent branching. Our results demonstrate the efficacy of DEN in balancing accuracy, uncertainty estimation, and scalability, making it a robust choice for real-world applications.
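As a rough PyTorch illustration of the shared-representation-plus-independent-branches idea (layer sizes, head count, and the use of head disagreement as the uncertainty signal are assumptions, not details from the paper):

```python
# Sketch: one shared trunk feeds several divergent heads; the mean of the head
# predictions is the output and their variance is a crude uncertainty estimate.
import torch
import torch.nn as nn

class DivergentEnsembleSketch(nn.Module):
    def __init__(self, in_dim: int, n_classes: int, n_heads: int = 5, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())   # shared parameters
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))
            for _ in range(n_heads)])                                       # independent branches

    def forward(self, x):
        z = self.trunk(x)
        probs = torch.stack([h(z) for h in self.heads]).softmax(dim=-1)     # (heads, B, classes)
        return probs.mean(dim=0), probs.var(dim=0)
```

Because only the lightweight heads are duplicated, the parameter count grows far more slowly than training the same number of fully independent networks.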

AI-BASED EARLY PREDICTION AND INTERVENTION FOR STUDENT ACADEMIC PERFORMANCE IN HIGHER EDUCATION
/slideshow/ai-based-early-prediction-and-intervention-for-student-academic-performance-in-higher-education/275551349 (Tue, 11 Feb 2025 14:29:29 GMT)
Accurately identifying at-risk students in higher education is crucial for timely interventions. This study presents an AI-based solution for predicting student performance using machine learning classifiers. A dataset of 208 student records from the past two years was preprocessed, and key predictors such as midterm grades, previous semester GPA, and cumulative GPA were selected using information gain evaluation. Multiple classifiers, including Support Vector Machine (SVM), Decision Tree, Naive Bayes, Artificial Neural Networks (ANN), and k-Nearest Neighbors (k-NN), were evaluated through 10-fold cross-validation. SVM demonstrated the highest performance with an accuracy of 85.1% and an F2 score of 94.0%, effectively identifying students scoring below 65% (GPA < 2.0). The model was implemented in a desktop application for educators, providing both class-level and individual-level predictions. This user-friendly tool enables instructors to monitor performance, predict outcomes, and implement timely interventions to support struggling students. The study highlights the effectiveness of machine learning in enhancing academic performance monitoring and offers a scalable approach for AI-driven educational tools.
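A minimal scikit-learn sketch of the kind of pipeline the abstract describes, assuming a tabular dataset with a binary at-risk label; the column name, number of selected features, and SVM kernel are assumptions, and mutual information stands in for the information-gain ranking.

```python
# Sketch: information-gain-style feature selection, then 10-fold CV of an SVM
# scored with F2 (recall-weighted), mirroring the evaluation in the abstract.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(df: pd.DataFrame, k_features: int = 3) -> float:
    # Assumed schema: predictor columns plus a binary "at_risk" label
    # (1 = expected grade below 65%, i.e. GPA < 2.0).
    X, y = df.drop(columns=["at_risk"]), df["at_risk"]
    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=k_features),
        StandardScaler(),
        SVC(kernel="rbf"))
    f2 = make_scorer(fbeta_score, beta=2)
    return cross_val_score(model, X, y, cv=10, scoring=f2).mean()
```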

LEVERAGING NAIVE BAYES FOR ENHANCED SURVIVAL ANALYSIS IN BREAST CANCER
/slideshow/leveraging-naive-bayes-for-enhanced-survival-analysis-in-breast-cancer/271010835 (Wed, 14 Aug 2024 12:16:07 GMT)
The study aims to predict breast cancer survival using Naïve Bayes techniques by comparing different machine learning models on a comprehensive dataset of patient records. The main classification groups were survival and non-survival. The objective was to assess the performance of the Naïve Bayes classifier in the field of data mining and to achieve significant results in survival classification, aligning with current academic research. The Naïve Bayes classifier attained an average accuracy of 91.08%, indicating consistent performance, though with some variability across different folds. Conversely, Logistic Regression achieved a higher accuracy of 94.84%, demonstrating proficiency in recognizing instances of class 1, yet encountering challenges with class 0. The Decision Tree model, with an accuracy of 93.42%, exhibited similar performance patterns. With an accuracy of 95.68%, Random Forest surpassed the Decision Tree. Nonetheless, all models encountered challenges in accurately classifying instances of class 0. The Naïve Bayes algorithm was juxtaposed with K-Nearest Neighbors (KNN) and Support Vector Machines (SVM). Future research aims to enhance prediction models with novel methods and tackle the challenge of accurately identifying instances of class 0.
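The abstract reports results but no code; the sketch below shows a generic scikit-learn comparison of the same model families under k-fold cross-validation (the dataset schema, label column, and preprocessing are assumptions).

```python
# Sketch: cross-validated comparison of Naive Bayes against other classifiers.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare(df: pd.DataFrame, label: str = "survived"):
    # "survived" is an assumed binary label column (class 1 = survival).
    X, y = df.drop(columns=[label]), df[label]
    models = {
        "naive_bayes": GaussianNB(),
        "logistic_regression": LogisticRegression(max_iter=1000),
        "decision_tree": DecisionTreeClassifier(),
        "random_forest": RandomForestClassifier(),
        "knn": KNeighborsClassifier(),
        "svm": SVC(),
    }
    return {name: cross_val_score(m, X, y, cv=10).mean() for name, m in models.items()}
```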

Comparing LLMs Using a Unified Performance Ranking System
/slideshow/comparing-llms-using-a-unified-performance-ranking-system/271010828 (Wed, 14 Aug 2024 12:15:12 GMT)
Large Language Models (LLMs) have transformed natural language processing and AI-driven applications, with rapid advances such as OpenAI's GPT, Meta's LLaMA, and Google's PaLM. Despite their transformative power, the lack of a common metric for comparing these models presents a substantial barrier for researchers and practitioners. This research proposes a novel performance ranking metric to meet the pressing demand for a complete evaluation system. Our metric compares LLM capabilities comprehensively by combining qualitative and quantitative evaluations. Through thorough benchmarking we examine the strengths and weaknesses of leading LLMs, providing insight into how their performance compares. This project aims to advance the development of more reliable and effective language models and to make well-informed model selection easier.
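The abstract does not define the metric itself, so the snippet below is only a hypothetical illustration of folding normalized quantitative benchmark scores and qualitative ratings into one ranking; the weighting scheme, criterion names, and numbers are placeholders, not values from the paper.

```python
# Hypothetical weighted aggregate; not the ranking metric proposed in the paper.
def unified_score(quantitative: dict, qualitative: dict, w_quant: float = 0.7) -> float:
    """Each dict maps a benchmark or criterion name to a score normalized to [0, 1]."""
    q = sum(quantitative.values()) / len(quantitative)
    h = sum(qualitative.values()) / len(qualitative)
    return w_quant * q + (1 - w_quant) * h

# Placeholder scores for two hypothetical models.
models = {
    "model_a": unified_score({"benchmark_1": 0.71, "benchmark_2": 0.63}, {"human_rating": 0.80}),
    "model_b": unified_score({"benchmark_1": 0.66, "benchmark_2": 0.70}, {"human_rating": 0.70}),
}
ranking = sorted(models, key=models.get, reverse=True)   # best model first
```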

EXPLORING AI APPLICATIONS FOR ADDRESSING ALTERNATIVE CONCEPTIONS IN TEACHING PHYSICS: FOCUS ON ELECTRICAL CIRCUITS AT THE SECONDARY LEVEL
/slideshow/exploring-ai-applications-for-addressing-alternative-conceptions-in-teaching-physics-focus-on-electrical-circuits-at-the-secondary-level/271010807 (Wed, 14 Aug 2024 12:14:29 GMT)
This article introduces our initial foray into a comprehensive, long-term study on the learning practices of high school students using chatbots as educational tools. In this section, we explore the correlation between the logic underlying the responses generated by two chatbots for problems related to electrical circuits and the well-known alternative conceptions prevalent in the field of science education. To achieve this, we employed a methodology involving the presentation of ten questions to the chatbots, followed by an analysis of their answers in conjunction with established knowledge in physics and the Theory of Conceptual Fields. The objective was to bridge the gap between AI-generated responses and human responses. Our primary findings reveal a close resemblance between these two groups. This initial endeavor lays the foundation for developing an investigative methodology that will facilitate a comprehensive understanding and categorization of the various forms of interaction between students and chatbots.

ENHANCE THE DETECTION OF DOS AND BRUTE FORCE ATTACKS WITHIN THE MQTT ENVIRONMENT THROUGH FEATURE ENGINEERING AND EMPLOYING AN ENSEMBLE TECHNIQUE
/slideshow/enhance-the-detection-of-dos-and-brute-force-attacks-within-the-mqtt-environment-through-feature-engineering-and-employing-an-ensemble-technique/271010795 (Wed, 14 Aug 2024 12:13:27 GMT)
The rapid development of the Internet of Things (IoT) environment has introduced unprecedented levels of connectivity and automation. The Message Queuing Telemetry Transport (MQTT) protocol has become recognized in IoT applications due to its lightweight and efficient design; however, this simplicity also renders MQTT vulnerable to multiple attacks, including denial of service (DoS) and brute-force attacks. This study aims to improve the detection of DoS and brute-force attacks in an MQTT traffic intrusion detection system (IDS). Our approach trains models on an MQTT dataset using effective feature engineering and ensemble learning techniques. Following our analysis and comparison, we identified the top 10 features demonstrating the highest effectiveness, leading to improved model accuracy. We used supervised machine learning models, including Random Forest, Decision Trees, k-Nearest Neighbors, and XGBoost, in combination with ensemble classifiers: stacking, voting, and bagging ensembles combine these four supervised machine learning methods. The results illustrate the proposed technique's efficacy in enhancing the accuracy of detecting DoS and brute-force attacks in MQTT traffic. Stacking and voting classifiers achieved the highest accuracy of 0.9538, and our approach outperforms the most recent study that utilized the same dataset.
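As a rough scikit-learn sketch of the setup the abstract outlines (the dataset columns, label name, selected features, and hyperparameters are assumptions, and GradientBoostingClassifier stands in for XGBoost to keep the example dependency-free):

```python
# Sketch: top-k feature selection feeding stacking and voting ensembles built
# from Random Forest, Decision Tree, k-NN, and a gradient-boosting stand-in.
import pandas as pd
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def evaluate(df: pd.DataFrame, label: str = "is_attack", k: int = 10):
    X, y = df.drop(columns=[label]), df[label]
    base = [("rf", RandomForestClassifier()),
            ("dt", DecisionTreeClassifier()),
            ("knn", KNeighborsClassifier()),
            ("gb", GradientBoostingClassifier())]
    ensembles = {"stacking": StackingClassifier(estimators=base),
                 "voting": VotingClassifier(estimators=base, voting="hard")}
    return {name: cross_val_score(
                make_pipeline(SelectKBest(mutual_info_classif, k=k), clf), X, y, cv=5).mean()
            for name, clf in ensembles.items()}
```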

CLASSIFYING EMERGENCY PATIENTS INTO FAST-TRACK AND COMPLEX CASES USING MACHINE LEARNING
/slideshow/classifying-emergency-patients-into-fast-track-and-complex-cases-using-machine-learning/269664980 (Thu, 13 Jun 2024 11:45:49 GMT)
Emergency medicine is a lifeline specialty at hospitals that patients head to for various reasons, including serious health problems, traumas, and adventitious conditions. Emergency departments are restricted to limited resources and personnel, complicating the optimal handling of all received cases; crowded waiting areas and long waiting durations result. In this research, the MIMIC-IV-ED and MIMIC-IV databases were utilized to obtain records of patients who visited the Beth Israel Deaconess Medical Center in the USA. Triage data, dispositions, and length of stay of these individuals were extracted. Subsequently, the urgency of these cases was inferred based on standards stated in the literature and followed in developed countries. A comparative framework using four different machine learning algorithms besides a reference model was developed to classify these patients into complex and fast-track categories. Moreover, the relative importance of the employed predictors was determined. This study proposes an approach to deal with non-urgent visits and lower overall waiting times at the emergency department by utilizing machine learning to identify high-severity and low-severity patients. Given the provision of the required resources, the proposed classification would help improve overall throughput and patient satisfaction.
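A schematic sketch of a comparative framework like the one described, with a trivial majority-class reference model; the triage schema, the fast-track labelling rule, and the particular four classifiers are illustrative assumptions, since the abstract does not name them.

```python
# Sketch: derive a fast-track vs complex label from disposition and length of
# stay, then compare four classifiers against a majority-class baseline.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def compare(triage: pd.DataFrame):
    # Hypothetical rule: discharged-home visits with a short stay are fast-track.
    y = ((triage["disposition"] == "HOME") & (triage["los_hours"] < 4)).astype(int)
    X = triage.drop(columns=["disposition", "los_hours"])  # remaining numeric triage features
    models = {"reference": DummyClassifier(strategy="most_frequent"),
              "logistic_regression": LogisticRegression(max_iter=1000),
              "decision_tree": DecisionTreeClassifier(),
              "random_forest": RandomForestClassifier(),
              "knn": KNeighborsClassifier()}
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```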

SYSTEMATIC REVIEW OF MODELS USED TO HANDLE CLASS IMBALANCE IN ANOMALY DETECTION FOR ENERGY CONSUMPTION
/slideshow/systematic-review-of-models-usedto-handle-class-imbalance-in-anomaly-detection-for-energy-consumption/269664946 (Thu, 13 Jun 2024 11:43:54 GMT)
The widespread integration of Smart technologies into energy consumption systems has brought about a transformative shift in monitoring and managing electricity usage. The imbalanced nature of anomaly data often results in suboptimal performance in detecting rare anomalies. This literature review analyzes models designed to address this challenge. The methodology involves a systematic literature review based on the five-step framework proposed by Khan, encompassing framing research questions, identifying relevant literature, assessing article quality, conducting a critical review, and interpreting results. The findings show that classical machine learning models like Support Vector Machines (SVM) and Random Forests (RF) are commonly used. In conclusion, classical machine learning models like SVM and RF struggle to recognize rare anomalies, while deep learning models, notably Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM), show promise for automatically learning elaborate representations and improving performance while dealing with class imbalance.
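The review surveys model families rather than one method; purely as a generic illustration of a common mitigation it discusses (making a deep sequence model weight the rare anomaly class more heavily in its loss), with the architecture and class ratio assumed:

```python
# Generic sketch: an LSTM anomaly detector whose loss up-weights the rare
# positive (anomaly) class.
import torch
import torch.nn as nn

class LSTMAnomalyDetector(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # one logit per consumption window

# If anomalies make up ~2% of windows (assumed ratio), weight them ~49x in the loss.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([49.0]))
```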

The widespread integration of Smart technologies into energy consumption systems has brought about a transformative shift in monitoring and managing electricity usage. The imbalanced nature of anomaly data often results in suboptimal performance in detecting rare anomalies. This literature review analyzes models designed to address this challenge. The methodology involves a systematic literature review based on the five-step framework proposed by Khan, encompassing framing research questions, identifying relevant literature, assessing article quality, conducting a critical review, and interpreting results. The findings show that classical machine learning models like Support Vector Machines (SVM) and Random Forests (RF) are commonly used. In conclusion, classical machine learning models like SVM and RF struggle to recognize rare anomalies, while deep learning models, notably Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM), show promise for automatically learning elaborate representations and improving performance while dealing with class imbalance. ]]>
Thu, 13 Jun 2024 11:43:54 GMT /slideshow/systematic-review-of-models-usedto-handle-class-imbalance-in-anomaly-detection-for-energy-consumption/269664946 ijaia@slideshare.net(ijaia) SYSTEMATIC REVIEW OF MODELS USEDTO HANDLE CLASS IMBALANCE IN ANOMALY DETECTION FOR ENERGY CONSUMPTION ijaia The widespread integration of Smart technologies into energy consumption systems has brought about a transformative shift in monitoring and managing electricity usage. The imbalanced nature of anomaly data often results in suboptimal performance in detecting rare anomalies. This literature review analyzes models designed to address this challenge. The methodology involves a systematic literature review based on the five-step framework proposed by Khan, encompassing framing research questions, identifying relevant literature, assessing article quality, conducting a critical review, and interpreting results. The findings show that classical machine learning models like Support Vector Machines (SVM) and Random Forests (RF) are commonly used. In conclusion, classical machine learning models like SVM and RF struggle to recognize rare anomalies, while deep learning models, notably Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM), show promise for automatically learning elaborate representations and improving performance while dealing with class imbalance. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/15324ijaia04-240613114354-ed3adca3-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> The widespread integration of Smart technologies into energy consumption systems has brought about a transformative shift in monitoring and managing electricity usage. The imbalanced nature of anomaly data often results in suboptimal performance in detecting rare anomalies. This literature review analyzes models designed to address this challenge. The methodology involves a systematic literature review based on the five-step framework proposed by Khan, encompassing framing research questions, identifying relevant literature, assessing article quality, conducting a critical review, and interpreting results. The findings show that classical machine learning models like Support Vector Machines (SVM) and Random Forests (RF) are commonly used. In conclusion, classical machine learning models like SVM and RF struggle to recognize rare anomalies, while deep learning models, notably Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM), show promise for automatically learning elaborate representations and improving performance while dealing with class imbalance.
SYSTEMATIC REVIEW OF MODELS USEDTO HANDLE CLASS IMBALANCE IN ANOMALY DETECTION FOR ENERGY CONSUMPTION from ijaia
]]>
11 0 https://cdn.slidesharecdn.com/ss_thumbnails/15324ijaia04-240613114354-ed3adca3-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
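As an illustration of one classical remedy the review discusses for class imbalance, cost-sensitive training via class weights can be sketched as follows; the synthetic data below mimics a rare-anomaly setting (roughly 2% positives) and is not taken from any reviewed study:

# Hedged illustration of cost-sensitive training for imbalanced anomaly detection.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

# class_weight="balanced" penalizes mistakes on the rare anomaly class more heavily.
for clf in (SVC(kernel="rbf", class_weight="balanced"),
            RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=1)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__)
    print(classification_report(y_test, clf.predict(X_test), digits=3))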
IS PROMPT ENGINEERING A PROFESSION? /slideshow/is-prompt-engineering-a-profession/269664926 15324ijaia03-240613114230-9425cd7c
Prompt Engineering, the systematic design and construction of prompts for human-AI interaction, raises questions regarding its professional status. This paper examines Prompt Engineering and evaluates whether it qualifies as a distinct profession. Through an analysis of its defining characteristics, including specialized skills, ethical considerations, and societal impact, this study explores the parallels between Prompt Engineering and established professions. Drawing on examples from various fields, it argues for the recognition of Prompt Engineering as a legitimate profession. By addressing the complexities of human-AI interaction and the evolving demands of technology, this research contributes to the ongoing discourse on the professionalization of emerging disciplines.
Thu, 13 Jun 2024 11:42:30 GMT ijaia@slideshare.net(ijaia)
IS PROMPT ENGINEERING A PROFESSION? from ijaia
PHOTOQR: A NOVEL ID CARD WITH AN ENCODED VIEW /slideshow/photoqr-a-novel-id-card-with-an-encoded-view/269664905 15324ijaia02-240613114142-db72a861
There is increasing interest in developing techniques to identify and assess data so as to allow easy and continuous access to resources, services, or places that require thorough ID control. Usually, several kinds of documents are required to grant access to these resources. To avoid forgeries without the need for extra credentials, a new system named photoQR is proposed here. The system is based on an ID card carrying two objects: a person's picture (pre-processed via blur and/or swirl techniques) and a QR code containing embedded data related to that picture. The idea is that the picture and the QR code can assess each other through a suitable hash value stored in the QR: the QR cannot be validated without the picture, and vice versa. An open-source prototype of the photoQR system has been implemented in Python and can be used in both offline and real-time environments; it combines security concepts and image-processing algorithms to achieve this mutual assessment.
Thu, 13 Jun 2024 11:41:42 GMT ijaia@slideshare.net(ijaia)
PHOTOQR: A NOVEL ID CARD WITH AN ENCODED VIEW from ijaia
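The paper's exact pipeline and payload format are not given in the abstract; a hedged Python sketch of the mutual picture/QR check, assuming Pillow and the qrcode package, an illustrative SHA-256 digest, and hypothetical input names, could look like this:

# Hedged sketch of the mutual picture/QR assessment described above.
import hashlib
import json
import qrcode
from PIL import Image, ImageFilter

def make_card(photo_path, holder_id):
    photo = Image.open(photo_path).convert("RGB")
    view = photo.filter(ImageFilter.GaussianBlur(radius=4))      # encoded (blurred) view of the picture
    digest = hashlib.sha256(view.tobytes()).hexdigest()          # binds the QR payload to this view
    payload = json.dumps({"id": holder_id, "photo_sha256": digest})
    return view, qrcode.make(payload), payload

def verify_card(view, payload):
    # The picture validates the QR and vice versa: neither is meaningful alone.
    expected = json.loads(payload)["photo_sha256"]
    return hashlib.sha256(view.tobytes()).hexdigest() == expected

view, qr_img, payload = make_card("holder_photo.jpg", "ID-0001")  # hypothetical inputs
print(verify_card(view, payload))                                 # True for an untampered card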
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL /slideshow/deep-learning-for-smart-grid-intrusion-detection-a-hybrid-cnn-lstm-based-model/269664893 15324ijaia01-240613114050-3377656d
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application-layer protocol extensively used in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids, combining a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) layers. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test the model. The experimental results show that the CNN-LSTM method detects smart grid intrusions markedly better than other deep learning classifiers, improving accuracy, precision, recall, and F1 score and achieving a detection accuracy of 99.50%.
Thu, 13 Jun 2024 11:40:50 GMT ijaia@slideshare.net(ijaia)
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL from ijaia
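A hedged Keras sketch of a hybrid CNN-LSTM detector over fixed-length windows of DNP3 flow features follows; the window shape and layer sizes are illustrative assumptions, since the paper's exact architecture is not given in the abstract:

# Hedged sketch: convolutional layers capture local patterns within a window,
# an LSTM layer captures temporal dependencies, and a sigmoid head flags attacks.
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, N_FEATURES = 50, 30          # assumed window shape

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),    # CNN part: local patterns per window
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                         # LSTM part: temporal dependencies
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),                   # attack vs. normal traffic
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()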
INFORMATION EXTRACTION FROM PRODUCT LABELS: A MACHINE VISION APPROACH /slideshow/information-extraction-from-product-labels-a-machine-vision-approach/267182655 15224ijaia04-240409162758-227520a6
This research tackles the challenge of manual data extraction from product labels by employing a blend of computer vision and Natural Language Processing (NLP). We introduce an enhanced model that combines Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) into a Convolutional Recurrent Neural Network (CRNN) for reliable text recognition. The model is further refined by incorporating the Tesseract OCR engine, enhancing its applicability to Optical Character Recognition (OCR) tasks. The methodology is augmented by NLP techniques and extended through the Open Food Facts API (Application Programming Interface) for database population and text-only label prediction. The CRNN model is trained on encoded labels and evaluated for accuracy on a dedicated test set. Importantly, the approach enables visually impaired individuals to access essential information on product labels, such as directions and ingredients. Overall, the study highlights the efficacy of deep learning and OCR in automating label extraction and recognition.
Tue, 09 Apr 2024 16:27:58 GMT ijaia@slideshare.net(ijaia)
INFORMATION EXTRACTION FROM PRODUCT LABELS: A MACHINE VISION APPROACH from ijaia
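The paper's CRNN is not reproduced here; as a rough, hedged illustration of the Tesseract OCR stage only, assuming pytesseract and Pillow are installed and "label.jpg" (a hypothetical file name) is a photo of a product label:

# Hedged sketch of the OCR step: extract raw label text, to be handed to the
# downstream NLP / Open Food Facts lookup described in the abstract.
import pytesseract
from PIL import Image

def read_label(image_path):
    img = Image.open(image_path).convert("L")          # grayscale often helps OCR
    text = pytesseract.image_to_string(img)
    return [line.strip() for line in text.splitlines() if line.strip()]

for line in read_label("label.jpg"):
    print(line)        # raw lines to be cleaned up by the downstream NLP step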
RESEARCH ON FUZZY C-CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD COMPUTING BAYES FUNCTION /slideshow/research-on-fuzzy-c-clustering-recursive-genetic-algorithm-based-on-cloud-computing-bayes-function/267182641 15224ijaia03-240409162702-b4c9ff90
To address the poor local search ability and premature convergence of the fuzzy C-clustering recursive genetic algorithm (FOLD++), a new fuzzy C-clustering recursive genetic algorithm based on Bayesian function adaptation search (TS) is proposed, incorporating the idea of Bayesian function adaptation search into the fuzzy C-clustering recursive genetic algorithm. The new algorithm combines the advantages of FOLD++ and TS. In the early stage of optimization, the fuzzy C-clustering recursive genetic algorithm is used to obtain a good initial value, and the individual extreme value pbest is stored in the Bayesian function adaptation table. In the late stage of optimization, when the search ability of the fuzzy C-clustering recursive genetic algorithm weakens, the short-term memory function of the Bayesian function adaptation table is used to make the search jump out of locally optimal solutions, allowing bad solutions to be accepted during the search. The improved algorithm is applied to function optimization, and simulation results show that its calculation accuracy and stability are improved, verifying the effectiveness of the improvement.
Tue, 09 Apr 2024 16:27:02 GMT ijaia@slideshare.net(ijaia)
RESEARCH ON FUZZY C-CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD COMPUTING BAYES FUNCTION from ijaia
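The two-phase idea in the abstract (a first phase supplies a good starting point, then a search with a short-term memory table may accept worse moves to escape local optima) can be sketched generically as follows; the objective f, starting point x0, and neighbors() generator are caller-supplied assumptions, and this is not the paper's FOLD++/TS implementation:

# Generic, hedged sketch of a memory-table local search for the second phase.
from collections import deque

def memory_table_search(f, x0, neighbors, iterations=500, memory_size=20):
    best = current = x0
    table = deque(maxlen=memory_size)        # short-term memory of recently visited solutions
    for _ in range(iterations):
        candidates = [x for x in neighbors(current) if x not in table]
        if not candidates:
            continue
        current = min(candidates, key=f)     # best non-remembered neighbor, even if worse than 'best'
        table.append(current)
        if f(current) < f(best):
            best = current
    return best

# Toy usage: minimize f(x) = x*x over integer neighbors, starting from the "good initial value" 37.
print(memory_table_search(lambda x: x * x, 37, lambda x: [x - 1, x + 1]))   # converges toward 0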
EMPLOYEE ATTRITION PREDICTION USING MACHINE LEARNING MODELS: A REVIEW PAPER /slideshow/employee-attrition-prediction-using-machine-learning-models-a-review-paper/267182623 15224ijaia02-240409162559-12381977
Employee attrition refers to the decrease in staff numbers within an organization due to various reasons. Because it has a negative impact on long-term growth objectives and workplace productivity, firms have recognized it as a significant concern. To address this issue, organizations are increasingly turning to machine-learning approaches to forecast employee attrition rates, and the topic has gained significant attention from researchers in recent years. Several studies have applied various machine-learning methods to predict employee attrition, producing different results depending on the methods, factors, and datasets employed. However, there has been no comprehensive comparative review of multiple studies applying machine-learning models to predict employee attrition to date. This study aims to fill that gap by providing an overview of research on applying machine learning to predict employee attrition from 2019 to February 2024. A literature review of relevant studies was conducted, summarized, and classified. Most studies conduct comparative experiments with multiple predictive models to determine the most effective one; across this literature, the Random Forest (RF) algorithm and the XGBoost (XGB) ensemble method are repeatedly the best performing, outperforming many other algorithms. The application of deep learning to employee attrition prediction also shows promise. While the datasets used in previous studies differ, the dataset provided by IBM is the most widely utilized. This study serves as a concise review for new researchers, facilitating their understanding of the primary techniques employed in predicting employee attrition and highlighting recent research trends in this field. Furthermore, it provides organizations with insight into the prominent factors affecting employee attrition, as identified by these studies, enabling them to implement solutions aimed at reducing attrition rates.
Tue, 09 Apr 2024 16:25:59 GMT ijaia@slideshare.net(ijaia)
EMPLOYEE ATTRITION PREDICTION USING MACHINE LEARNING MODELS: A REVIEW PAPER from ijaia
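A hedged sketch of the kind of baseline most of the reviewed studies report, a Random Forest on the public IBM HR attrition data, is given below; the file path and the use of one-hot encoding are assumptions, while the "Attrition" (Yes/No) target column is part of that dataset:

# Hedged baseline: Random Forest on the IBM HR attrition dataset, plus the
# feature-importance ranking that motivates the "prominent factors" discussion.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("ibm_hr_attrition.csv")                  # hypothetical local path to the IBM dataset
y = (df["Attrition"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["Attrition"]))        # one-hot encode categorical predictors

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=400, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, rf.predict(X_test)))

# Prominent factors associated with attrition, as ranked by the forest.
print(pd.Series(rf.feature_importances_, index=X.columns).nlargest(10))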
THE TRANSFORMATION RISK-BENEFIT MODEL OF ARTIFICIAL INTELLIGENCE: BALANCING RISKS AND BENEFITS THROUGH PRACTICAL SOLUTIONS AND USE CASES /slideshow/the-transformation-riskbenefit-model-of-artificial-intelligencebalancing-risks-and-benefits-through-practical-solutions-and-use-cases/267182602 15224ijaia01-240409162458-88ee9de2
This paper summarizes the most cogent advantages and risks associated with Artificial Intelligence, drawing on an in-depth review of the literature. The authors then synthesize the salient risk-related models currently being used in AI, technology, and business-related scenarios. Next, in view of an updated context of AI and the theories, models, and expanded constructs reviewed, the writers propose a new framework, called The Transformation Risk-Benefit Model of Artificial Intelligence, to address the increasing fears about and levels of AI risk. Using the model's characteristics, the article emphasizes practical and innovative solutions where benefits outweigh risks and presents three use cases, in healthcare, climate change/environment, and cyber security, to illustrate the unique interplay of principles, dimensions, and processes of this transformational AI model.
Tue, 09 Apr 2024 16:24:58 GMT ijaia@slideshare.net(ijaia)
THE TRANSFORMATION RISK-BENEFIT MODEL OF ARTIFICIAL INTELLIGENCE: BALANCING RISKS AND BENEFITS THROUGH PRACTICAL SOLUTIONS AND USE CASES from ijaia
AN IMPROVED MT5 MODEL FOR CHINESE TEXT SUMMARY GENERATION /slideshow/an-improved-mt5-model-for-chinese-text-summary-generation/266215002 15124ijaia09-240208095521-f87b7fba
Complicated policy texts require a lot of effort to read, so there is a need for intelligent interpretation of Chinese policies. To better address the Chinese text summarization task, this paper uses the mT5 model as the core framework and source of initial weights. In addition, the paper reduces the model size through parameter clipping, uses the Gap Sentence Generation (GSG) method as an unsupervised training method, and improves the Chinese tokenizer. After training on a meticulously processed 30 GB Chinese corpus, the paper develops the enhanced mT5-GSG model. When fine-tuning on Chinese policy text, the paper adopts the idea of applying Dropout twice and innovatively combines the probability distributions of the two Dropout passes through the Wasserstein distance. Experimental results indicate that the proposed model achieves Rouge-1, Rouge-2, and Rouge-L scores of 56.13%, 45.76%, and 56.41%, respectively, on the Chinese policy text summarization dataset.
Thu, 08 Feb 2024 09:55:21 GMT ijaia@slideshare.net(ijaia)
AN IMPROVED MT5 MODEL FOR CHINESE TEXT SUMMARY GENERATION from ijaia
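A hedged sketch of summary generation with an mT5-style checkpoint via the Hugging Face transformers library follows; "path/to/mt5-gsg-policy" is a placeholder, since the paper's fine-tuned mT5-GSG weights, tokenizer changes, and prompt format are not published in the abstract:

# Hedged inference sketch for an mT5-style summarizer (placeholder checkpoint path).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/mt5-gsg-policy")
model = AutoModelForSeq2SeqLM.from_pretrained("path/to/mt5-gsg-policy")

def summarize(text, max_new_tokens=128):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(summarize("<full text of a Chinese policy document>"))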
A MACHINE LEARNING ENSEMBLE MODEL FOR THE DETECTION OF CYBERBULLYING /slideshow/a-machine-learning-ensemble-model-for-the-detection-of-cyberbullying/266214986 15124ijaia08-240208095425-69b4b0ea
The pervasive use of social media platforms, such as Facebook, Instagram, and X, has significantly amplified our electronic interconnectedness. Moreover, these platforms are now easily accessible from any location at any given time. However, the increased popularity of social media has also led to cyberbullying.It is imperative to address the need for finding, monitoring, and mitigating cyberbullying posts on social media platforms. Motivated by this necessity, we present this paper to contribute to developing an automated system for detecting binary labels of aggressive tweets.Our study has demonstrated remarkable performance compared to previous experiments on the same dataset. We employed the stacking ensemble machine learning method, utilizing four various feature extraction techniques to optimize performance within the stacking ensemble learning framework. Combining five machine learning algorithms,Decision Trees, Random Forest, Linear Support Vector Classification, Logistic Regression, and K-Nearest Neighbors into an ensemble method, we achieved superior results compared to traditional machine learning classifier models. The stacking classifier achieved a high accuracy rate of 94.00%, outperforming traditional machine learning models and surpassing the results of prior experiments that utilized the same dataset. The outcomes of our experiments showcased an accuracy rate of 0.94% in detection tweets as aggressive or non-aggressive. ]]>

The pervasive use of social media platforms, such as Facebook, Instagram, and X, has significantly amplified our electronic interconnectedness. Moreover, these platforms are now easily accessible from any location at any given time. However, the increased popularity of social media has also led to cyberbullying.It is imperative to address the need for finding, monitoring, and mitigating cyberbullying posts on social media platforms. Motivated by this necessity, we present this paper to contribute to developing an automated system for detecting binary labels of aggressive tweets.Our study has demonstrated remarkable performance compared to previous experiments on the same dataset. We employed the stacking ensemble machine learning method, utilizing four various feature extraction techniques to optimize performance within the stacking ensemble learning framework. Combining five machine learning algorithms,Decision Trees, Random Forest, Linear Support Vector Classification, Logistic Regression, and K-Nearest Neighbors into an ensemble method, we achieved superior results compared to traditional machine learning classifier models. The stacking classifier achieved a high accuracy rate of 94.00%, outperforming traditional machine learning models and surpassing the results of prior experiments that utilized the same dataset. The outcomes of our experiments showcased an accuracy rate of 0.94% in detection tweets as aggressive or non-aggressive. ]]>
Thu, 08 Feb 2024 09:54:25 GMT /slideshow/a-machine-learning-ensemble-model-for-the-detection-of-cyberbullying/266214986 ijaia@slideshare.net(ijaia) A MACHINE LEARNING ENSEMBLE MODEL FOR THE DETECTION OF CYBERBULLYING ijaia The pervasive use of social media platforms, such as Facebook, Instagram, and X, has significantly amplified our electronic interconnectedness. Moreover, these platforms are now easily accessible from any location at any given time. However, the increased popularity of social media has also led to cyberbullying.It is imperative to address the need for finding, monitoring, and mitigating cyberbullying posts on social media platforms. Motivated by this necessity, we present this paper to contribute to developing an automated system for detecting binary labels of aggressive tweets.Our study has demonstrated remarkable performance compared to previous experiments on the same dataset. We employed the stacking ensemble machine learning method, utilizing four various feature extraction techniques to optimize performance within the stacking ensemble learning framework. Combining five machine learning algorithms,Decision Trees, Random Forest, Linear Support Vector Classification, Logistic Regression, and K-Nearest Neighbors into an ensemble method, we achieved superior results compared to traditional machine learning classifier models. The stacking classifier achieved a high accuracy rate of 94.00%, outperforming traditional machine learning models and surpassing the results of prior experiments that utilized the same dataset. The outcomes of our experiments showcased an accuracy rate of 0.94% in detection tweets as aggressive or non-aggressive. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/15124ijaia08-240208095425-69b4b0ea-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> The pervasive use of social media platforms, such as Facebook, Instagram, and X, has significantly amplified our electronic interconnectedness. Moreover, these platforms are now easily accessible from any location at any given time. However, the increased popularity of social media has also led to cyberbullying.It is imperative to address the need for finding, monitoring, and mitigating cyberbullying posts on social media platforms. Motivated by this necessity, we present this paper to contribute to developing an automated system for detecting binary labels of aggressive tweets.Our study has demonstrated remarkable performance compared to previous experiments on the same dataset. We employed the stacking ensemble machine learning method, utilizing four various feature extraction techniques to optimize performance within the stacking ensemble learning framework. Combining five machine learning algorithms,Decision Trees, Random Forest, Linear Support Vector Classification, Logistic Regression, and K-Nearest Neighbors into an ensemble method, we achieved superior results compared to traditional machine learning classifier models. The stacking classifier achieved a high accuracy rate of 94.00%, outperforming traditional machine learning models and surpassing the results of prior experiments that utilized the same dataset. The outcomes of our experiments showcased an accuracy rate of 0.94% in detection tweets as aggressive or non-aggressive.
A MACHINE LEARNING ENSEMBLE MODEL FOR THE DETECTION OF CYBERBULLYING from ijaia
]]>
26 0 https://cdn.slidesharecdn.com/ss_thumbnails/15124ijaia08-240208095425-69b4b0ea-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
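A hedged scikit-learn sketch of the stacking ensemble described above, with the five named base learners and a logistic-regression meta-learner over TF-IDF features, is shown below; the hyperparameters, TF-IDF settings, and the variables texts_train/y_train (tweet strings and 0/1 aggression labels) are assumptions, not the paper's exact configuration:

# Hedged sketch of a stacking ensemble for aggressive-tweet detection.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

base_learners = [
    ("dt", DecisionTreeClassifier(max_depth=20)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svc", LinearSVC()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier(n_neighbors=7)),
]
model = make_pipeline(
    TfidfVectorizer(max_features=20000, ngram_range=(1, 2)),
    StackingClassifier(estimators=base_learners,
                       final_estimator=LogisticRegression(max_iter=1000), cv=5),
)
model.fit(texts_train, y_train)          # texts_train: list of tweets, y_train: 0/1 labels (assumed)
print(model.score(texts_test, y_test))   # held-out accuracy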