ºÝºÝߣshows by User: ijci / http://www.slideshare.net/images/logo.gif ºÝºÝߣshows by User: ijci / Thu, 02 Jan 2025 10:32:01 GMT ºÝºÝߣShare feed for ºÝºÝߣshows by User: ijci 6th International Conference on Machine learning and Cloud Computing (MLCL 2025) /slideshow/6th-international-conference-on-machine-learning-and-cloud-computing-mlcl-2025/274582528 mlcl2025cfp-250102103201-d5cbbe4e
#machinelearning #cloudcomputing #virtualization #CloudStorage #ParallelProcessing #filesystem #programming #security #socialnetwork #cloudsecurity #consolidation #opensource #systemintegration #computing #EdgeComputing #deployment #artificialintelligence #DataSecurity #DataStorage #CloudServices #scalability #performance Submit Your Research Articles...!!! Welcome To MLCL 2025 6th International Conference on Machine learning and Cloud Computing (MLCL 2025) March 22 ~ 23, 2025, Sydney, Australia Webpage URL: https://csita2025.org/mlcl/index Submission Deadline: January 04, 2025 Contact us: Here's where you can reach us: mlcl@csita2025.org (or) mlclconference@yahoo.com Submission URL: https://csita2025.org/submission/index.php]]>

#machinelearning #cloudcomputing #virtualization #CloudStorage #ParallelProcessing #filesystem #programming #security #socialnetwork #cloudsecurity #consolidation #opensource #systemintegration #computing #EdgeComputing #deployment #artificialintelligence #DataSecurity #DataStorage #CloudServices #scalability #performance Submit Your Research Articles...!!! Welcome To MLCL 2025 6th International Conference on Machine learning and Cloud Computing (MLCL 2025) March 22 ~ 23, 2025, Sydney, Australia Webpage URL: https://csita2025.org/mlcl/index Submission Deadline: January 04, 2025 Contact us: Here's where you can reach us: mlcl@csita2025.org (or) mlclconference@yahoo.com Submission URL: https://csita2025.org/submission/index.php]]>
Thu, 02 Jan 2025 10:32:01 GMT /slideshow/6th-international-conference-on-machine-learning-and-cloud-computing-mlcl-2025/274582528 ijci@slideshare.net(ijci) 6th International Conference on Machine learning and Cloud Computing (MLCL 2025) ijci #machinelearning #cloudcomputing #virtualization #CloudStorage #ParallelProcessing #filesystem #programming #security #socialnetwork #cloudsecurity #consolidation #opensource #systemintegration #computing #EdgeComputing #deployment #artificialintelligence #DataSecurity #DataStorage #CloudServices #scalability #performance Submit Your Research Articles...!!! Welcome To MLCL 2025 6th International Conference on Machine learning and Cloud Computing (MLCL 2025) March 22 ~ 23, 2025, Sydney, Australia Webpage URL: https://csita2025.org/mlcl/index Submission Deadline: January 04, 2025 Contact us: Here's where you can reach us: mlcl@csita2025.org (or) mlclconference@yahoo.com Submission URL: https://csita2025.org/submission/index.php <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/mlcl2025cfp-250102103201-d5cbbe4e-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> #machinelearning #cloudcomputing #virtualization #CloudStorage #ParallelProcessing #filesystem #programming #security #socialnetwork #cloudsecurity #consolidation #opensource #systemintegration #computing #EdgeComputing #deployment #artificialintelligence #DataSecurity #DataStorage #CloudServices #scalability #performance Submit Your Research Articles...!!! Welcome To MLCL 2025 6th International Conference on Machine learning and Cloud Computing (MLCL 2025) March 22 ~ 23, 2025, Sydney, Australia Webpage URL: https://csita2025.org/mlcl/index Submission Deadline: January 04, 2025 Contact us: Here&#39;s where you can reach us: mlcl@csita2025.org (or) mlclconference@yahoo.com Submission URL: https://csita2025.org/submission/index.php
6th International Conference on Machine learning and Cloud Computing (MLCL 2025) from IJCI JOURNAL
]]>
13 0 https://cdn.slidesharecdn.com/ss_thumbnails/mlcl2025cfp-250102103201-d5cbbe4e-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Actionable Pattern Discovery for Emotion Detection in BigData in Education and Business /slideshow/actionable-pattern-discovery-for-emotion-detection-in-bigdata-in-education-and-business/274185840 13624ijci07-241218135032-89b954d4
Action Rules are rule based systems that extract actionable patterns which are hidden in big volumes of data generated from Education sector, Business field, Medical domain and Social Media, in a single day. In the technological world of big data, massive amounts of data are collected by organizations, including in major domains like financial, medical, social media and Internet of Things(IoT). Mining this data can provide a lot of meaningful insights on how to improve user experience in multiple domain. Users need recommendations on actions they can undertake to increase their profit or accomplish their goals, this recommendations are provided by Actionable patterns. For example: How to improve student learning; how to increase business profitability; how to improve user experience in social media; and how to heal patients and assist hospital administrators. Action Rules provide actionable suggestions on how to change the state of an object from an existing state to a desired state for the benefit of the user. The traditional Action Rules extraction models, which analyze the data in a non distributed fashion, does not perform well when dealing larger datasets. In this work we are concentrating on the vertical data splitting strategy using information granules and creating the data partitioning more logically instead of splitting the data randomly and also generating meta actions after the vertical split. Information granules form basic entities in the world of Granular Computing(GrC), which represents meaningful smaller units derived from a larger complex information system. We introduced Modified Hybrid Action rule method with Partition Threshold Rho. Modified Hybrid Action rule mining approach combines both these frameworks and generates complete set of Action Rules, which further improves the computational performance with large datasets. ]]>

Action Rules are rule based systems that extract actionable patterns which are hidden in big volumes of data generated from Education sector, Business field, Medical domain and Social Media, in a single day. In the technological world of big data, massive amounts of data are collected by organizations, including in major domains like financial, medical, social media and Internet of Things(IoT). Mining this data can provide a lot of meaningful insights on how to improve user experience in multiple domain. Users need recommendations on actions they can undertake to increase their profit or accomplish their goals, this recommendations are provided by Actionable patterns. For example: How to improve student learning; how to increase business profitability; how to improve user experience in social media; and how to heal patients and assist hospital administrators. Action Rules provide actionable suggestions on how to change the state of an object from an existing state to a desired state for the benefit of the user. The traditional Action Rules extraction models, which analyze the data in a non distributed fashion, does not perform well when dealing larger datasets. In this work we are concentrating on the vertical data splitting strategy using information granules and creating the data partitioning more logically instead of splitting the data randomly and also generating meta actions after the vertical split. Information granules form basic entities in the world of Granular Computing(GrC), which represents meaningful smaller units derived from a larger complex information system. We introduced Modified Hybrid Action rule method with Partition Threshold Rho. Modified Hybrid Action rule mining approach combines both these frameworks and generates complete set of Action Rules, which further improves the computational performance with large datasets. ]]>
Wed, 18 Dec 2024 13:50:32 GMT /slideshow/actionable-pattern-discovery-for-emotion-detection-in-bigdata-in-education-and-business/274185840 ijci@slideshare.net(ijci) Actionable Pattern Discovery for Emotion Detection in BigData in Education and Business ijci Action Rules are rule based systems that extract actionable patterns which are hidden in big volumes of data generated from Education sector, Business field, Medical domain and Social Media, in a single day. In the technological world of big data, massive amounts of data are collected by organizations, including in major domains like financial, medical, social media and Internet of Things(IoT). Mining this data can provide a lot of meaningful insights on how to improve user experience in multiple domain. Users need recommendations on actions they can undertake to increase their profit or accomplish their goals, this recommendations are provided by Actionable patterns. For example: How to improve student learning; how to increase business profitability; how to improve user experience in social media; and how to heal patients and assist hospital administrators. Action Rules provide actionable suggestions on how to change the state of an object from an existing state to a desired state for the benefit of the user. The traditional Action Rules extraction models, which analyze the data in a non distributed fashion, does not perform well when dealing larger datasets. In this work we are concentrating on the vertical data splitting strategy using information granules and creating the data partitioning more logically instead of splitting the data randomly and also generating meta actions after the vertical split. Information granules form basic entities in the world of Granular Computing(GrC), which represents meaningful smaller units derived from a larger complex information system. We introduced Modified Hybrid Action rule method with Partition Threshold Rho. Modified Hybrid Action rule mining approach combines both these frameworks and generates complete set of Action Rules, which further improves the computational performance with large datasets. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci07-241218135032-89b954d4-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Action Rules are rule based systems that extract actionable patterns which are hidden in big volumes of data generated from Education sector, Business field, Medical domain and Social Media, in a single day. In the technological world of big data, massive amounts of data are collected by organizations, including in major domains like financial, medical, social media and Internet of Things(IoT). Mining this data can provide a lot of meaningful insights on how to improve user experience in multiple domain. Users need recommendations on actions they can undertake to increase their profit or accomplish their goals, this recommendations are provided by Actionable patterns. For example: How to improve student learning; how to increase business profitability; how to improve user experience in social media; and how to heal patients and assist hospital administrators. Action Rules provide actionable suggestions on how to change the state of an object from an existing state to a desired state for the benefit of the user. The traditional Action Rules extraction models, which analyze the data in a non distributed fashion, does not perform well when dealing larger datasets. In this work we are concentrating on the vertical data splitting strategy using information granules and creating the data partitioning more logically instead of splitting the data randomly and also generating meta actions after the vertical split. Information granules form basic entities in the world of Granular Computing(GrC), which represents meaningful smaller units derived from a larger complex information system. We introduced Modified Hybrid Action rule method with Partition Threshold Rho. Modified Hybrid Action rule mining approach combines both these frameworks and generates complete set of Action Rules, which further improves the computational performance with large datasets.
Actionable Pattern Discovery for Emotion Detection in BigData in Education and Business from IJCI JOURNAL
]]>
17 0 https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci07-241218135032-89b954d4-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Proposal of a Data Model for a Dynamic Adaptation of Resources in IOTs (ADR-IOT) /slideshow/proposal-of-a-data-model-for-a-dynamic-adaptation-of-resources-in-iots-adr-iot/274185826 13624ijci06-241218134938-52a100a1
In this work, the main objective is to provide a contribution of resources adaptation to consumption demand in IOT environments. To do this, we have proposed a data model including the entities "resource ", "load ", "event ", "policy" and "device" as well as the different relationships between IOT devices and others. This data model, an adaptation process is proposed as well as a mathematical model based on the optimization of resource consumption on requests while, taking into account certain constraints including the Maximum Capacity of resources, the Satisfaction of user or IOT device requests and the Energy Constraints have been proposed. The simulation results regarding the optimization of resource consumption show that our model could be beneficial for smart city management, industry 4.0 and e-health. ]]>

In this work, the main objective is to provide a contribution of resources adaptation to consumption demand in IOT environments. To do this, we have proposed a data model including the entities "resource ", "load ", "event ", "policy" and "device" as well as the different relationships between IOT devices and others. This data model, an adaptation process is proposed as well as a mathematical model based on the optimization of resource consumption on requests while, taking into account certain constraints including the Maximum Capacity of resources, the Satisfaction of user or IOT device requests and the Energy Constraints have been proposed. The simulation results regarding the optimization of resource consumption show that our model could be beneficial for smart city management, industry 4.0 and e-health. ]]>
Wed, 18 Dec 2024 13:49:38 GMT /slideshow/proposal-of-a-data-model-for-a-dynamic-adaptation-of-resources-in-iots-adr-iot/274185826 ijci@slideshare.net(ijci) Proposal of a Data Model for a Dynamic Adaptation of Resources in IOTs (ADR-IOT) ijci In this work, the main objective is to provide a contribution of resources adaptation to consumption demand in IOT environments. To do this, we have proposed a data model including the entities "resource ", "load ", "event ", "policy" and "device" as well as the different relationships between IOT devices and others. This data model, an adaptation process is proposed as well as a mathematical model based on the optimization of resource consumption on requests while, taking into account certain constraints including the Maximum Capacity of resources, the Satisfaction of user or IOT device requests and the Energy Constraints have been proposed. The simulation results regarding the optimization of resource consumption show that our model could be beneficial for smart city management, industry 4.0 and e-health. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci06-241218134938-52a100a1-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> In this work, the main objective is to provide a contribution of resources adaptation to consumption demand in IOT environments. To do this, we have proposed a data model including the entities &quot;resource &quot;, &quot;load &quot;, &quot;event &quot;, &quot;policy&quot; and &quot;device&quot; as well as the different relationships between IOT devices and others. This data model, an adaptation process is proposed as well as a mathematical model based on the optimization of resource consumption on requests while, taking into account certain constraints including the Maximum Capacity of resources, the Satisfaction of user or IOT device requests and the Energy Constraints have been proposed. The simulation results regarding the optimization of resource consumption show that our model could be beneficial for smart city management, industry 4.0 and e-health.
Proposal of a Data Model for a Dynamic Adaptation of Resources in IOTs (ADR-IOT) from IJCI JOURNAL
]]>
7 0 https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci06-241218134938-52a100a1-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Divergent Ensemble Networks: Enhancing Uncertainty Estimation with Shared Representations and Independent Branching /slideshow/divergent-ensemble-networks-enhancing-uncertainty-estimation-with-shared-representations-and-independent-branching/274185806 13624ijci05-241218134816-99578672
Ensemble learning has proven effective in improving predictive performance and estimating uncertainty in neural networks. However, conventional ensemble methods often suffer from redundant parameter usage and computational inefficiencies due to entirely independent network training. To address these challenges, we propose the Divergent Ensemble Network (DEN), a novel architecture that combines shared representation learning with independent branching. DEN employs a shared input layer to capture common features across all branches, followed by divergent, independently trainable layers that form an ensemble. This shared-to-branching structure reduces parameter redundancy while maintaining ensemble diversity, enabling efficient and scalable learning. ]]>

Ensemble learning has proven effective in improving predictive performance and estimating uncertainty in neural networks. However, conventional ensemble methods often suffer from redundant parameter usage and computational inefficiencies due to entirely independent network training. To address these challenges, we propose the Divergent Ensemble Network (DEN), a novel architecture that combines shared representation learning with independent branching. DEN employs a shared input layer to capture common features across all branches, followed by divergent, independently trainable layers that form an ensemble. This shared-to-branching structure reduces parameter redundancy while maintaining ensemble diversity, enabling efficient and scalable learning. ]]>
Wed, 18 Dec 2024 13:48:16 GMT /slideshow/divergent-ensemble-networks-enhancing-uncertainty-estimation-with-shared-representations-and-independent-branching/274185806 ijci@slideshare.net(ijci) Divergent Ensemble Networks: Enhancing Uncertainty Estimation with Shared Representations and Independent Branching ijci Ensemble learning has proven effective in improving predictive performance and estimating uncertainty in neural networks. However, conventional ensemble methods often suffer from redundant parameter usage and computational inefficiencies due to entirely independent network training. To address these challenges, we propose the Divergent Ensemble Network (DEN), a novel architecture that combines shared representation learning with independent branching. DEN employs a shared input layer to capture common features across all branches, followed by divergent, independently trainable layers that form an ensemble. This shared-to-branching structure reduces parameter redundancy while maintaining ensemble diversity, enabling efficient and scalable learning. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci05-241218134816-99578672-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Ensemble learning has proven effective in improving predictive performance and estimating uncertainty in neural networks. However, conventional ensemble methods often suffer from redundant parameter usage and computational inefficiencies due to entirely independent network training. To address these challenges, we propose the Divergent Ensemble Network (DEN), a novel architecture that combines shared representation learning with independent branching. DEN employs a shared input layer to capture common features across all branches, followed by divergent, independently trainable layers that form an ensemble. This shared-to-branching structure reduces parameter redundancy while maintaining ensemble diversity, enabling efficient and scalable learning.
Divergent Ensemble Networks: Enhancing Uncertainty Estimation with Shared Representations and Independent Branching from IJCI JOURNAL
]]>
5 0 https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci05-241218134816-99578672-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
AdversLLM: A Practical Guide To Governance, Maturity and Risk Assessment For LLM-Based Applications /slideshow/adversllm-a-practical-guide-to-governance-maturity-and-risk-assessment-for-llm-based-applications/274185791 13624ijci04-241218134717-e88f121d
AdversLLM is a comprehensive framework designed to help organizations tackle security threats associated with the use of Large Language Models (LLMs), such as prompt injections and data poisoning. As LLMs become integral to various industries, the framework aims to bolster organizational readiness and resilience by assessing governance, maturity, and risk mitigation strategies. AdversLLM includes an assessment form for reviewing practices, maturity levels, and auditing mitigation strategies, supplemented with real-world scenarios to demonstrate effective AI governance. Additionally, it features a prompt injection testing ground with a benchmark dataset to evaluate LLMs' robustness against malicious prompts. The framework also addresses ethical concerns by proposing a zero-shot learning defense mechanism and a RAG-based LLM safety tutor to educate on security risks and protection methods. AdversLLM provides a targeted, practical approach for organizations to ensure responsible AI adoption and strengthen their defenses against emerging LLM-related security challenges. ]]>

AdversLLM is a comprehensive framework designed to help organizations tackle security threats associated with the use of Large Language Models (LLMs), such as prompt injections and data poisoning. As LLMs become integral to various industries, the framework aims to bolster organizational readiness and resilience by assessing governance, maturity, and risk mitigation strategies. AdversLLM includes an assessment form for reviewing practices, maturity levels, and auditing mitigation strategies, supplemented with real-world scenarios to demonstrate effective AI governance. Additionally, it features a prompt injection testing ground with a benchmark dataset to evaluate LLMs' robustness against malicious prompts. The framework also addresses ethical concerns by proposing a zero-shot learning defense mechanism and a RAG-based LLM safety tutor to educate on security risks and protection methods. AdversLLM provides a targeted, practical approach for organizations to ensure responsible AI adoption and strengthen their defenses against emerging LLM-related security challenges. ]]>
Wed, 18 Dec 2024 13:47:17 GMT /slideshow/adversllm-a-practical-guide-to-governance-maturity-and-risk-assessment-for-llm-based-applications/274185791 ijci@slideshare.net(ijci) AdversLLM: A Practical Guide To Governance, Maturity and Risk Assessment For LLM-Based Applications ijci AdversLLM is a comprehensive framework designed to help organizations tackle security threats associated with the use of Large Language Models (LLMs), such as prompt injections and data poisoning. As LLMs become integral to various industries, the framework aims to bolster organizational readiness and resilience by assessing governance, maturity, and risk mitigation strategies. AdversLLM includes an assessment form for reviewing practices, maturity levels, and auditing mitigation strategies, supplemented with real-world scenarios to demonstrate effective AI governance. Additionally, it features a prompt injection testing ground with a benchmark dataset to evaluate LLMs' robustness against malicious prompts. The framework also addresses ethical concerns by proposing a zero-shot learning defense mechanism and a RAG-based LLM safety tutor to educate on security risks and protection methods. AdversLLM provides a targeted, practical approach for organizations to ensure responsible AI adoption and strengthen their defenses against emerging LLM-related security challenges. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci04-241218134717-e88f121d-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> AdversLLM is a comprehensive framework designed to help organizations tackle security threats associated with the use of Large Language Models (LLMs), such as prompt injections and data poisoning. As LLMs become integral to various industries, the framework aims to bolster organizational readiness and resilience by assessing governance, maturity, and risk mitigation strategies. AdversLLM includes an assessment form for reviewing practices, maturity levels, and auditing mitigation strategies, supplemented with real-world scenarios to demonstrate effective AI governance. Additionally, it features a prompt injection testing ground with a benchmark dataset to evaluate LLMs&#39; robustness against malicious prompts. The framework also addresses ethical concerns by proposing a zero-shot learning defense mechanism and a RAG-based LLM safety tutor to educate on security risks and protection methods. AdversLLM provides a targeted, practical approach for organizations to ensure responsible AI adoption and strengthen their defenses against emerging LLM-related security challenges.
AdversLLM: A Practical Guide To Governance, Maturity and Risk Assessment For LLM-Based Applications from IJCI JOURNAL
]]>
13 0 https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci04-241218134717-e88f121d-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Integrating Event-Based Neuromorphic Processing and Hyperdimensional Computing with Tropical Algebra for Cognitive Ontology Networks /slideshow/integrating-event-based-neuromorphic-processing-and-hyperdimensional-computing-with-tropical-algebra-for-cognitive-ontology-networks/274185776 13624ijci03-241218134633-6a1d85eb
This paper presents a complete framework for combining event-based neuromorphic processing, hyperdimensional computing and tropical algebra for use within cognitive ontology networks. Using the Iris dataset I construct a virtual ontology network to simulate cognitive computing processes. Event-based neuromorphic processing models with spike activities and stochastic synapses dynamically adapt the networks’ topology. Hyperdimensional vectors represent the entities and relationships whilst tropical algebra operations bind these representations to encode complex relationships. A Multi-Layer Perceptron (MLP) with adaptive dropout and learning rates that are influenced by neuromorphic spike activities performs clustering and classification tasks. ]]>

This paper presents a complete framework for combining event-based neuromorphic processing, hyperdimensional computing and tropical algebra for use within cognitive ontology networks. Using the Iris dataset I construct a virtual ontology network to simulate cognitive computing processes. Event-based neuromorphic processing models with spike activities and stochastic synapses dynamically adapt the networks’ topology. Hyperdimensional vectors represent the entities and relationships whilst tropical algebra operations bind these representations to encode complex relationships. A Multi-Layer Perceptron (MLP) with adaptive dropout and learning rates that are influenced by neuromorphic spike activities performs clustering and classification tasks. ]]>
Wed, 18 Dec 2024 13:46:32 GMT /slideshow/integrating-event-based-neuromorphic-processing-and-hyperdimensional-computing-with-tropical-algebra-for-cognitive-ontology-networks/274185776 ijci@slideshare.net(ijci) Integrating Event-Based Neuromorphic Processing and Hyperdimensional Computing with Tropical Algebra for Cognitive Ontology Networks ijci This paper presents a complete framework for combining event-based neuromorphic processing, hyperdimensional computing and tropical algebra for use within cognitive ontology networks. Using the Iris dataset I construct a virtual ontology network to simulate cognitive computing processes. Event-based neuromorphic processing models with spike activities and stochastic synapses dynamically adapt the networks’ topology. Hyperdimensional vectors represent the entities and relationships whilst tropical algebra operations bind these representations to encode complex relationships. A Multi-Layer Perceptron (MLP) with adaptive dropout and learning rates that are influenced by neuromorphic spike activities performs clustering and classification tasks. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci03-241218134633-6a1d85eb-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This paper presents a complete framework for combining event-based neuromorphic processing, hyperdimensional computing and tropical algebra for use within cognitive ontology networks. Using the Iris dataset I construct a virtual ontology network to simulate cognitive computing processes. Event-based neuromorphic processing models with spike activities and stochastic synapses dynamically adapt the networks’ topology. Hyperdimensional vectors represent the entities and relationships whilst tropical algebra operations bind these representations to encode complex relationships. A Multi-Layer Perceptron (MLP) with adaptive dropout and learning rates that are influenced by neuromorphic spike activities performs clustering and classification tasks.
Integrating Event-Based Neuromorphic Processing and Hyperdimensional Computing with Tropical Algebra for Cognitive Ontology Networks from IJCI JOURNAL
]]>
11 0 https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci03-241218134633-6a1d85eb-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Dynamic Multi-Agent Orchestration and Retrieval for Multi-Source Question-Answer Systems using Large Language Models /slideshow/dynamic-multi-agent-orchestration-and-retrieval-for-multi-source-question-answer-systems-using-large-language-models/274185738 13624ijci02-241218134446-dce24ea2
We propose a methodology that combines several advanced techniques in Large Language Model (LLM) retrieval to support the development of robust, multi-source questionanswer systems. This methodology is designed to integrate information from diverse data sources, including unstructured documents (PDFs) and structured databases, through a coordinated multi-agent orchestration and dynamic retrieval approach. Our methodology leverages specialized agents—such as SQL agents, Retrieval-Augmented Generation (RAG) agents, and router agents—that dynamically select the most appropriate retrieval strategy based on the nature of each query. To further improve accuracy and contextual relevance, we employ dynamic prompt engineering, which adapts in real time to query-specific contexts. The methodology’s effectiveness is demonstrated within the domain of Contract Management, where complex queries often require seamless interaction between unstructured and structured data. Our results indicate that this approach enhances response accuracy and relevance, offering a versatile and scalable framework for developing question-answer systems that can operate across various domains and data sources. ]]>

We propose a methodology that combines several advanced techniques in Large Language Model (LLM) retrieval to support the development of robust, multi-source questionanswer systems. This methodology is designed to integrate information from diverse data sources, including unstructured documents (PDFs) and structured databases, through a coordinated multi-agent orchestration and dynamic retrieval approach. Our methodology leverages specialized agents—such as SQL agents, Retrieval-Augmented Generation (RAG) agents, and router agents—that dynamically select the most appropriate retrieval strategy based on the nature of each query. To further improve accuracy and contextual relevance, we employ dynamic prompt engineering, which adapts in real time to query-specific contexts. The methodology’s effectiveness is demonstrated within the domain of Contract Management, where complex queries often require seamless interaction between unstructured and structured data. Our results indicate that this approach enhances response accuracy and relevance, offering a versatile and scalable framework for developing question-answer systems that can operate across various domains and data sources. ]]>
Wed, 18 Dec 2024 13:44:46 GMT /slideshow/dynamic-multi-agent-orchestration-and-retrieval-for-multi-source-question-answer-systems-using-large-language-models/274185738 ijci@slideshare.net(ijci) Dynamic Multi-Agent Orchestration and Retrieval for Multi-Source Question-Answer Systems using Large Language Models ijci We propose a methodology that combines several advanced techniques in Large Language Model (LLM) retrieval to support the development of robust, multi-source questionanswer systems. This methodology is designed to integrate information from diverse data sources, including unstructured documents (PDFs) and structured databases, through a coordinated multi-agent orchestration and dynamic retrieval approach. Our methodology leverages specialized agents—such as SQL agents, Retrieval-Augmented Generation (RAG) agents, and router agents—that dynamically select the most appropriate retrieval strategy based on the nature of each query. To further improve accuracy and contextual relevance, we employ dynamic prompt engineering, which adapts in real time to query-specific contexts. The methodology’s effectiveness is demonstrated within the domain of Contract Management, where complex queries often require seamless interaction between unstructured and structured data. Our results indicate that this approach enhances response accuracy and relevance, offering a versatile and scalable framework for developing question-answer systems that can operate across various domains and data sources. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci02-241218134446-dce24ea2-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> We propose a methodology that combines several advanced techniques in Large Language Model (LLM) retrieval to support the development of robust, multi-source questionanswer systems. This methodology is designed to integrate information from diverse data sources, including unstructured documents (PDFs) and structured databases, through a coordinated multi-agent orchestration and dynamic retrieval approach. Our methodology leverages specialized agents—such as SQL agents, Retrieval-Augmented Generation (RAG) agents, and router agents—that dynamically select the most appropriate retrieval strategy based on the nature of each query. To further improve accuracy and contextual relevance, we employ dynamic prompt engineering, which adapts in real time to query-specific contexts. The methodology’s effectiveness is demonstrated within the domain of Contract Management, where complex queries often require seamless interaction between unstructured and structured data. Our results indicate that this approach enhances response accuracy and relevance, offering a versatile and scalable framework for developing question-answer systems that can operate across various domains and data sources.
Dynamic Multi-Agent Orchestration and Retrieval for Multi-Source Question-Answer Systems using Large Language Models from IJCI JOURNAL
]]>
61 0 https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci02-241218134446-dce24ea2-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Leveraging Large Language Models For Optimized Item Categorization using UNSPSC Taxonomy /slideshow/leveraging-large-language-models-for-optimized-item-categorization-using-unspsc-taxonomy/274185716 13624ijci01-241218134318-63bac607
Effective item categorization is vital for businesses, enabling the transformation of unstructured datasets into organized categories that streamline inventory management. Despite its importance, item categorization remains highly subjective and lacks a uniform standard across industries and businesses. The United Nations Standard Products and Services Code (UNSPSC) provides a standardized system for cataloguing inventory, yet employing UNSPSC categorizations often demands significant manual effort. This paper investigates the deployment of Large Language Models (LLMs) to automate the classification of inventory data into UNSPSC codes based on Item Descriptions. We evaluate the accuracy and efficiency of LLMs in categorizing diverse datasets, exploring their language processing capabilities and their potential as a tool for standardizing inventory classification. Our findings reveal that LLMs can substantially diminish the manual labor involved in item categorization while maintaining high accuracy, offering a scalable solution for businesses striving to enhance their inventory management practices. ]]>

Effective item categorization is vital for businesses, enabling the transformation of unstructured datasets into organized categories that streamline inventory management. Despite its importance, item categorization remains highly subjective and lacks a uniform standard across industries and businesses. The United Nations Standard Products and Services Code (UNSPSC) provides a standardized system for cataloguing inventory, yet employing UNSPSC categorizations often demands significant manual effort. This paper investigates the deployment of Large Language Models (LLMs) to automate the classification of inventory data into UNSPSC codes based on Item Descriptions. We evaluate the accuracy and efficiency of LLMs in categorizing diverse datasets, exploring their language processing capabilities and their potential as a tool for standardizing inventory classification. Our findings reveal that LLMs can substantially diminish the manual labor involved in item categorization while maintaining high accuracy, offering a scalable solution for businesses striving to enhance their inventory management practices. ]]>
Wed, 18 Dec 2024 13:43:18 GMT /slideshow/leveraging-large-language-models-for-optimized-item-categorization-using-unspsc-taxonomy/274185716 ijci@slideshare.net(ijci) Leveraging Large Language Models For Optimized Item Categorization using UNSPSC Taxonomy ijci Effective item categorization is vital for businesses, enabling the transformation of unstructured datasets into organized categories that streamline inventory management. Despite its importance, item categorization remains highly subjective and lacks a uniform standard across industries and businesses. The United Nations Standard Products and Services Code (UNSPSC) provides a standardized system for cataloguing inventory, yet employing UNSPSC categorizations often demands significant manual effort. This paper investigates the deployment of Large Language Models (LLMs) to automate the classification of inventory data into UNSPSC codes based on Item Descriptions. We evaluate the accuracy and efficiency of LLMs in categorizing diverse datasets, exploring their language processing capabilities and their potential as a tool for standardizing inventory classification. Our findings reveal that LLMs can substantially diminish the manual labor involved in item categorization while maintaining high accuracy, offering a scalable solution for businesses striving to enhance their inventory management practices. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci01-241218134318-63bac607-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Effective item categorization is vital for businesses, enabling the transformation of unstructured datasets into organized categories that streamline inventory management. Despite its importance, item categorization remains highly subjective and lacks a uniform standard across industries and businesses. The United Nations Standard Products and Services Code (UNSPSC) provides a standardized system for cataloguing inventory, yet employing UNSPSC categorizations often demands significant manual effort. This paper investigates the deployment of Large Language Models (LLMs) to automate the classification of inventory data into UNSPSC codes based on Item Descriptions. We evaluate the accuracy and efficiency of LLMs in categorizing diverse datasets, exploring their language processing capabilities and their potential as a tool for standardizing inventory classification. Our findings reveal that LLMs can substantially diminish the manual labor involved in item categorization while maintaining high accuracy, offering a scalable solution for businesses striving to enhance their inventory management practices.
Leveraging Large Language Models For Optimized Item Categorization using UNSPSC Taxonomy from IJCI JOURNAL
]]>
92 0 https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci01-241218134318-63bac607-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Uncertainty-Aware Seismic Signal Discrimination using Bayesian Convolutional Neural Networks /slideshow/uncertainty-aware-seismic-signal-discrimination-using-bayesian-convolutional-neural-networks/272320704 13524ijci13-241010115138-925e4c00
Seismic signal classification plays a crucial role in mitigating the impact of seismic events on human lives and infrastructure. Traditional methods in seismic hazard assessment often overlook the inherent uncertainties associated with the prediction of this complex geological phenomenon. This work introduces a probabilistic framework that leverages Bayesian principles to model and quantify uncertainty in seismic signal classification by applying a Bayesian Convolutional Neural Network (BCNN). The BCNN was trained on a dataset that comprises waveforms detected in the Southern California region and achieved an accuracy of 99.1%. Monte Carlo Sampling subsequently creates a 95% prediction interval for probabilities that considers epistemic and aleatoric uncertainties. The ability to visualize both aleatoric and epistemic uncertainties provides decision-makers with information to determine the reliability of seismic signal classifications. Further, the use of Bayesian CNN for seismic signal classification provides a more robust foundation for decision-making and risk assessment in earthquake-prone regions. ]]>

Seismic signal classification plays a crucial role in mitigating the impact of seismic events on human lives and infrastructure. Traditional methods in seismic hazard assessment often overlook the inherent uncertainties associated with the prediction of this complex geological phenomenon. This work introduces a probabilistic framework that leverages Bayesian principles to model and quantify uncertainty in seismic signal classification by applying a Bayesian Convolutional Neural Network (BCNN). The BCNN was trained on a dataset that comprises waveforms detected in the Southern California region and achieved an accuracy of 99.1%. Monte Carlo Sampling subsequently creates a 95% prediction interval for probabilities that considers epistemic and aleatoric uncertainties. The ability to visualize both aleatoric and epistemic uncertainties provides decision-makers with information to determine the reliability of seismic signal classifications. Further, the use of Bayesian CNN for seismic signal classification provides a more robust foundation for decision-making and risk assessment in earthquake-prone regions. ]]>
Thu, 10 Oct 2024 11:51:38 GMT /slideshow/uncertainty-aware-seismic-signal-discrimination-using-bayesian-convolutional-neural-networks/272320704 ijci@slideshare.net(ijci) Uncertainty-Aware Seismic Signal Discrimination using Bayesian Convolutional Neural Networks ijci Seismic signal classification plays a crucial role in mitigating the impact of seismic events on human lives and infrastructure. Traditional methods in seismic hazard assessment often overlook the inherent uncertainties associated with the prediction of this complex geological phenomenon. This work introduces a probabilistic framework that leverages Bayesian principles to model and quantify uncertainty in seismic signal classification by applying a Bayesian Convolutional Neural Network (BCNN). The BCNN was trained on a dataset that comprises waveforms detected in the Southern California region and achieved an accuracy of 99.1%. Monte Carlo Sampling subsequently creates a 95% prediction interval for probabilities that considers epistemic and aleatoric uncertainties. The ability to visualize both aleatoric and epistemic uncertainties provides decision-makers with information to determine the reliability of seismic signal classifications. Further, the use of Bayesian CNN for seismic signal classification provides a more robust foundation for decision-making and risk assessment in earthquake-prone regions. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci13-241010115138-925e4c00-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Seismic signal classification plays a crucial role in mitigating the impact of seismic events on human lives and infrastructure. Traditional methods in seismic hazard assessment often overlook the inherent uncertainties associated with the prediction of this complex geological phenomenon. This work introduces a probabilistic framework that leverages Bayesian principles to model and quantify uncertainty in seismic signal classification by applying a Bayesian Convolutional Neural Network (BCNN). The BCNN was trained on a dataset that comprises waveforms detected in the Southern California region and achieved an accuracy of 99.1%. Monte Carlo Sampling subsequently creates a 95% prediction interval for probabilities that considers epistemic and aleatoric uncertainties. The ability to visualize both aleatoric and epistemic uncertainties provides decision-makers with information to determine the reliability of seismic signal classifications. Further, the use of Bayesian CNN for seismic signal classification provides a more robust foundation for decision-making and risk assessment in earthquake-prone regions.
Uncertainty-Aware Seismic Signal Discrimination using Bayesian Convolutional Neural Networks from IJCI JOURNAL
]]>
5 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci13-241010115138-925e4c00-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Predictive Analytics for Pilot Training in Southern Africa /slideshow/predictive-analytics-for-pilot-training-in-southern-africa/272291990 13524ijci12-241009110934-4cb22bab
This paper aims to enhance aviation safety by identifying and addressing pilot performance weaknesses through data-driven techniques, focusing on the strategic adoption of predictive analytics in pilot training across Southern Africa, particularly in South Africa, Namibia, and Botswana. The main objective is to utilize advanced technologies like Natural Language Processing (NLP) and machine learning (ML) to analyze aviation incident reports and identify patterns of pilot errors and operational risks. The study's results highlight vital insights, which pave the way for tailored training programs designed to mitigate risks. The achievements of the study include filling a non-empirical gap by applying the Diffusion of Innovations (DOI) framework to examine the adoption of predictive analytics alongside recommendations for standardized reporting, specialized training modules, and the integration of weather analytics. These outcomes demonstrate the transformative potential of predictive analytics in improving pilot training and enhancing safety in the Southern African aviation sector. ]]>

This paper aims to enhance aviation safety by identifying and addressing pilot performance weaknesses through data-driven techniques, focusing on the strategic adoption of predictive analytics in pilot training across Southern Africa, particularly in South Africa, Namibia, and Botswana. The main objective is to utilize advanced technologies like Natural Language Processing (NLP) and machine learning (ML) to analyze aviation incident reports and identify patterns of pilot errors and operational risks. The study's results highlight vital insights, which pave the way for tailored training programs designed to mitigate risks. The achievements of the study include filling a non-empirical gap by applying the Diffusion of Innovations (DOI) framework to examine the adoption of predictive analytics alongside recommendations for standardized reporting, specialized training modules, and the integration of weather analytics. These outcomes demonstrate the transformative potential of predictive analytics in improving pilot training and enhancing safety in the Southern African aviation sector. ]]>
Wed, 09 Oct 2024 11:09:34 GMT /slideshow/predictive-analytics-for-pilot-training-in-southern-africa/272291990 ijci@slideshare.net(ijci) Predictive Analytics for Pilot Training in Southern Africa ijci This paper aims to enhance aviation safety by identifying and addressing pilot performance weaknesses through data-driven techniques, focusing on the strategic adoption of predictive analytics in pilot training across Southern Africa, particularly in South Africa, Namibia, and Botswana. The main objective is to utilize advanced technologies like Natural Language Processing (NLP) and machine learning (ML) to analyze aviation incident reports and identify patterns of pilot errors and operational risks. The study's results highlight vital insights, which pave the way for tailored training programs designed to mitigate risks. The achievements of the study include filling a non-empirical gap by applying the Diffusion of Innovations (DOI) framework to examine the adoption of predictive analytics alongside recommendations for standardized reporting, specialized training modules, and the integration of weather analytics. These outcomes demonstrate the transformative potential of predictive analytics in improving pilot training and enhancing safety in the Southern African aviation sector. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci12-241009110934-4cb22bab-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This paper aims to enhance aviation safety by identifying and addressing pilot performance weaknesses through data-driven techniques, focusing on the strategic adoption of predictive analytics in pilot training across Southern Africa, particularly in South Africa, Namibia, and Botswana. The main objective is to utilize advanced technologies like Natural Language Processing (NLP) and machine learning (ML) to analyze aviation incident reports and identify patterns of pilot errors and operational risks. The study&#39;s results highlight vital insights, which pave the way for tailored training programs designed to mitigate risks. The achievements of the study include filling a non-empirical gap by applying the Diffusion of Innovations (DOI) framework to examine the adoption of predictive analytics alongside recommendations for standardized reporting, specialized training modules, and the integration of weather analytics. These outcomes demonstrate the transformative potential of predictive analytics in improving pilot training and enhancing safety in the Southern African aviation sector.
Predictive Analytics for Pilot Training in Southern Africa from IJCI JOURNAL
]]>
46 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci12-241009110934-4cb22bab-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Similar Data Points Identification with LLM: A Human-in-the-Loop Strategy Using Summarization and Hidden State Insights /slideshow/similar-data-points-identification-with-llm-a-human-in-the-loop-strategy-using-summarization-and-hidden-state-insights/272291795 13524ijci111-241009110118-8cdee1ae
This study introduces a simple yet effective method for identifying similar data points across non-free text domains, such as tabular and image data, using Large Language Models (LLMs). Our two-step approach involves data point summarization and hidden state extraction. Initially, data is condensed via summarization using an LLM, reducing complexity and highlighting essential information in sentences. Subsequently, the summarization sentences are fed through another LLM to extract hidden states, serving as compact, feature-rich representations. This approach leverages the advanced comprehension and generative capabilities of LLMs, offering a scalable and efficient strategy for similarity identification across diverse datasets. We demonstrate the effectiveness of our method in identifying similar data points on multiple datasets. Additionally, our approach enables non-technical domain experts, such as fraud investigators or marketing operators, to quickly identify similar data points tailored to specific scenarios, demonstrating its utility in practical applications. In general, our results open new avenues for leveraging LLMs in data analysis across various domains. ]]>

This study introduces a simple yet effective method for identifying similar data points across non-free text domains, such as tabular and image data, using Large Language Models (LLMs). Our two-step approach involves data point summarization and hidden state extraction. Initially, data is condensed via summarization using an LLM, reducing complexity and highlighting essential information in sentences. Subsequently, the summarization sentences are fed through another LLM to extract hidden states, serving as compact, feature-rich representations. This approach leverages the advanced comprehension and generative capabilities of LLMs, offering a scalable and efficient strategy for similarity identification across diverse datasets. We demonstrate the effectiveness of our method in identifying similar data points on multiple datasets. Additionally, our approach enables non-technical domain experts, such as fraud investigators or marketing operators, to quickly identify similar data points tailored to specific scenarios, demonstrating its utility in practical applications. In general, our results open new avenues for leveraging LLMs in data analysis across various domains. ]]>
Wed, 09 Oct 2024 11:01:18 GMT /slideshow/similar-data-points-identification-with-llm-a-human-in-the-loop-strategy-using-summarization-and-hidden-state-insights/272291795 ijci@slideshare.net(ijci) Similar Data Points Identification with LLM: A Human-in-the-Loop Strategy Using Summarization and Hidden State Insights ijci This study introduces a simple yet effective method for identifying similar data points across non-free text domains, such as tabular and image data, using Large Language Models (LLMs). Our two-step approach involves data point summarization and hidden state extraction. Initially, data is condensed via summarization using an LLM, reducing complexity and highlighting essential information in sentences. Subsequently, the summarization sentences are fed through another LLM to extract hidden states, serving as compact, feature-rich representations. This approach leverages the advanced comprehension and generative capabilities of LLMs, offering a scalable and efficient strategy for similarity identification across diverse datasets. We demonstrate the effectiveness of our method in identifying similar data points on multiple datasets. Additionally, our approach enables non-technical domain experts, such as fraud investigators or marketing operators, to quickly identify similar data points tailored to specific scenarios, demonstrating its utility in practical applications. In general, our results open new avenues for leveraging LLMs in data analysis across various domains. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci111-241009110118-8cdee1ae-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This study introduces a simple yet effective method for identifying similar data points across non-free text domains, such as tabular and image data, using Large Language Models (LLMs). Our two-step approach involves data point summarization and hidden state extraction. Initially, data is condensed via summarization using an LLM, reducing complexity and highlighting essential information in sentences. Subsequently, the summarization sentences are fed through another LLM to extract hidden states, serving as compact, feature-rich representations. This approach leverages the advanced comprehension and generative capabilities of LLMs, offering a scalable and efficient strategy for similarity identification across diverse datasets. We demonstrate the effectiveness of our method in identifying similar data points on multiple datasets. Additionally, our approach enables non-technical domain experts, such as fraud investigators or marketing operators, to quickly identify similar data points tailored to specific scenarios, demonstrating its utility in practical applications. In general, our results open new avenues for leveraging LLMs in data analysis across various domains.
Similar Data Points Identification with LLM: A Human-in-the-Loop Strategy Using Summarization and Hidden State Insights from IJCI JOURNAL
]]>
14 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci111-241009110118-8cdee1ae-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Haste Makes Waste: A Moderated Mediation Model of the Mechanisms Linking Artificial Intelligence Advancement to Film Firm Performance /slideshow/haste-makes-waste-a-moderated-mediation-model-of-the-mechanisms-linking-artificial-intelligence-advancement-to-film-firm-performance/272291760 13524ijci101-241009110025-d62ef033
Artificial intelligence (AI) has emerged as a transformative force in the modern film industry, revolutionizing production processes and redefining audience experiences. This study delves into the mechanisms through which AI advancement impacts film firm performance, with a focus on the mediating roles of innovation speed and quality, and the moderating effect of human-machine collaboration. Employing a resource-based view, we construct a moderated mediation model and analyze data from 355 global film firms. Our findings reveal that AI advancement positively influences film firm performance, with innovation quality serving as a significant mediator. However, the mediating role of innovation speed is not pronounced. Moreover, the degree of human-machine collaboration positively moderates the relationships between AI advancement and both innovation speed and quality. However, its moderating role between AI advancement and firm performance is not significant. The study underscores the theoretical and practical implications of utilizing advanced AI to foster innovation and competitive advantage in film firms. ]]>

Artificial intelligence (AI) has emerged as a transformative force in the modern film industry, revolutionizing production processes and redefining audience experiences. This study delves into the mechanisms through which AI advancement impacts film firm performance, with a focus on the mediating roles of innovation speed and quality, and the moderating effect of human-machine collaboration. Employing a resource-based view, we construct a moderated mediation model and analyze data from 355 global film firms. Our findings reveal that AI advancement positively influences film firm performance, with innovation quality serving as a significant mediator. However, the mediating role of innovation speed is not pronounced. Moreover, the degree of human-machine collaboration positively moderates the relationships between AI advancement and both innovation speed and quality. However, its moderating role between AI advancement and firm performance is not significant. The study underscores the theoretical and practical implications of utilizing advanced AI to foster innovation and competitive advantage in film firms. ]]>
Wed, 09 Oct 2024 11:00:25 GMT /slideshow/haste-makes-waste-a-moderated-mediation-model-of-the-mechanisms-linking-artificial-intelligence-advancement-to-film-firm-performance/272291760 ijci@slideshare.net(ijci) Haste Makes Waste: A Moderated Mediation Model of the Mechanisms Linking Artificial Intelligence Advancement to Film Firm Performance ijci Artificial intelligence (AI) has emerged as a transformative force in the modern film industry, revolutionizing production processes and redefining audience experiences. This study delves into the mechanisms through which AI advancement impacts film firm performance, with a focus on the mediating roles of innovation speed and quality, and the moderating effect of human-machine collaboration. Employing a resource-based view, we construct a moderated mediation model and analyze data from 355 global film firms. Our findings reveal that AI advancement positively influences film firm performance, with innovation quality serving as a significant mediator. However, the mediating role of innovation speed is not pronounced. Moreover, the degree of human-machine collaboration positively moderates the relationships between AI advancement and both innovation speed and quality. However, its moderating role between AI advancement and firm performance is not significant. The study underscores the theoretical and practical implications of utilizing advanced AI to foster innovation and competitive advantage in film firms. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci101-241009110025-d62ef033-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Artificial intelligence (AI) has emerged as a transformative force in the modern film industry, revolutionizing production processes and redefining audience experiences. This study delves into the mechanisms through which AI advancement impacts film firm performance, with a focus on the mediating roles of innovation speed and quality, and the moderating effect of human-machine collaboration. Employing a resource-based view, we construct a moderated mediation model and analyze data from 355 global film firms. Our findings reveal that AI advancement positively influences film firm performance, with innovation quality serving as a significant mediator. However, the mediating role of innovation speed is not pronounced. Moreover, the degree of human-machine collaboration positively moderates the relationships between AI advancement and both innovation speed and quality. However, its moderating role between AI advancement and firm performance is not significant. The study underscores the theoretical and practical implications of utilizing advanced AI to foster innovation and competitive advantage in film firms.
Haste Makes Waste: A Moderated Mediation Model of the Mechanisms Linking Artificial Intelligence Advancement to Film Firm Performance from IJCI JOURNAL
]]>
10 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci101-241009110025-d62ef033-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
A New Mathetic (The Science of Learning) and Didactic (The Science of Teaching) Concept /slideshow/a-new-mathetic-the-science-of-learning-and-didactic-the-science-of-teaching-concept/272291695 13524ijci091-241009105836-9fa92c7e
The concept of Learngaming integrates Darwin’s natural selection theory into a learning and teaching framework designed to prepare individuals for the demands of the 21st century. It focuses on entrepreneurial learning through play, fostering the development of key skills such as efficiency, flexibility, and creativity. In contrast to traditional educational approaches, which emphasise rote learning and exams, Learngaming promotes active, nonlinear, and collaborative learning environments that reflect the dynamic, digital context of modern life. The methodology uses digital "LEARNGames" to simulate real-world challenges, allowing learners to adapt and thrive in uncertain environments by encouraging risk-taking and instant feedback. This method proves particularly effective in teaching 21stcentury skills, as demonstrated in projects with gifted children, where learners showed improved retention and engagement through play-based learning. By shifting from a teacher-centred model to a learner-driven, game-based approach, Learngaming enhances both personal and collective learning outcomes, preparing individuals to succeed in an ever-evolving society. ]]>

The concept of Learngaming integrates Darwin’s natural selection theory into a learning and teaching framework designed to prepare individuals for the demands of the 21st century. It focuses on entrepreneurial learning through play, fostering the development of key skills such as efficiency, flexibility, and creativity. In contrast to traditional educational approaches, which emphasise rote learning and exams, Learngaming promotes active, nonlinear, and collaborative learning environments that reflect the dynamic, digital context of modern life. The methodology uses digital "LEARNGames" to simulate real-world challenges, allowing learners to adapt and thrive in uncertain environments by encouraging risk-taking and instant feedback. This method proves particularly effective in teaching 21stcentury skills, as demonstrated in projects with gifted children, where learners showed improved retention and engagement through play-based learning. By shifting from a teacher-centred model to a learner-driven, game-based approach, Learngaming enhances both personal and collective learning outcomes, preparing individuals to succeed in an ever-evolving society. ]]>
Wed, 09 Oct 2024 10:58:36 GMT /slideshow/a-new-mathetic-the-science-of-learning-and-didactic-the-science-of-teaching-concept/272291695 ijci@slideshare.net(ijci) A New Mathetic (The Science of Learning) and Didactic (The Science of Teaching) Concept ijci The concept of Learngaming integrates Darwin’s natural selection theory into a learning and teaching framework designed to prepare individuals for the demands of the 21st century. It focuses on entrepreneurial learning through play, fostering the development of key skills such as efficiency, flexibility, and creativity. In contrast to traditional educational approaches, which emphasise rote learning and exams, Learngaming promotes active, nonlinear, and collaborative learning environments that reflect the dynamic, digital context of modern life. The methodology uses digital "LEARNGames" to simulate real-world challenges, allowing learners to adapt and thrive in uncertain environments by encouraging risk-taking and instant feedback. This method proves particularly effective in teaching 21st�century skills, as demonstrated in projects with gifted children, where learners showed improved retention and engagement through play-based learning. By shifting from a teacher-centred model to a learner-driven, game-based approach, Learngaming enhances both personal and collective learning outcomes, preparing individuals to succeed in an ever-evolving society. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci091-241009105836-9fa92c7e-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> The concept of Learngaming integrates Darwin’s natural selection theory into a learning and teaching framework designed to prepare individuals for the demands of the 21st century. It focuses on entrepreneurial learning through play, fostering the development of key skills such as efficiency, flexibility, and creativity. In contrast to traditional educational approaches, which emphasise rote learning and exams, Learngaming promotes active, nonlinear, and collaborative learning environments that reflect the dynamic, digital context of modern life. The methodology uses digital &quot;LEARNGames&quot; to simulate real-world challenges, allowing learners to adapt and thrive in uncertain environments by encouraging risk-taking and instant feedback. This method proves particularly effective in teaching 21st�century skills, as demonstrated in projects with gifted children, where learners showed improved retention and engagement through play-based learning. By shifting from a teacher-centred model to a learner-driven, game-based approach, Learngaming enhances both personal and collective learning outcomes, preparing individuals to succeed in an ever-evolving society.
A New Mathetic (The Science of Learning) and Didactic (The Science of Teaching) Concept from IJCI JOURNAL
]]>
10 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci091-241009105836-9fa92c7e-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Blockchain Applications in Cyber Liability Insurance /slideshow/blockchain-applications-in-cyber-liability-insurance/272291654 13524ijci08-241009105653-2d903219
Blockchain technology is revolutionizing cyber liability insurance (CLI) by addressing key challenges in underwriting, risk assessment, and claims management. As cyber-attacks become more frequent and complex, the demand for effective CLI solutions has surged. Traditional insurance practices often fall short in this rapidly evolving landscape. Blockchain offers a decentralized, secure, and transparent approach, enhancing the accuracy of risk assessments and preventing fraudulent claims. By maintaining an immutable ledger of historical claims, blockchain allows for better comparison of new claims against past data. Additionally, smart contracts within blockchain frameworks can automate claims processing, reducing administrative tasks and speeding up resolutions. Blockchain also enables decentralized, peer-to-peer insurance platforms, allowing multiple insurers to pool resources and share risks in a transparent, efficient manner. This study explores how blockchain can transform CLI, improving efficiency and security across the industry. ]]>

Blockchain technology is revolutionizing cyber liability insurance (CLI) by addressing key challenges in underwriting, risk assessment, and claims management. As cyber-attacks become more frequent and complex, the demand for effective CLI solutions has surged. Traditional insurance practices often fall short in this rapidly evolving landscape. Blockchain offers a decentralized, secure, and transparent approach, enhancing the accuracy of risk assessments and preventing fraudulent claims. By maintaining an immutable ledger of historical claims, blockchain allows for better comparison of new claims against past data. Additionally, smart contracts within blockchain frameworks can automate claims processing, reducing administrative tasks and speeding up resolutions. Blockchain also enables decentralized, peer-to-peer insurance platforms, allowing multiple insurers to pool resources and share risks in a transparent, efficient manner. This study explores how blockchain can transform CLI, improving efficiency and security across the industry. ]]>
Wed, 09 Oct 2024 10:56:53 GMT /slideshow/blockchain-applications-in-cyber-liability-insurance/272291654 ijci@slideshare.net(ijci) Blockchain Applications in Cyber Liability Insurance ijci Blockchain technology is revolutionizing cyber liability insurance (CLI) by addressing key challenges in underwriting, risk assessment, and claims management. As cyber-attacks become more frequent and complex, the demand for effective CLI solutions has surged. Traditional insurance practices often fall short in this rapidly evolving landscape. Blockchain offers a decentralized, secure, and transparent approach, enhancing the accuracy of risk assessments and preventing fraudulent claims. By maintaining an immutable ledger of historical claims, blockchain allows for better comparison of new claims against past data. Additionally, smart contracts within blockchain frameworks can automate claims processing, reducing administrative tasks and speeding up resolutions. Blockchain also enables decentralized, peer-to-peer insurance platforms, allowing multiple insurers to pool resources and share risks in a transparent, efficient manner. This study explores how blockchain can transform CLI, improving efficiency and security across the industry. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci08-241009105653-2d903219-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Blockchain technology is revolutionizing cyber liability insurance (CLI) by addressing key challenges in underwriting, risk assessment, and claims management. As cyber-attacks become more frequent and complex, the demand for effective CLI solutions has surged. Traditional insurance practices often fall short in this rapidly evolving landscape. Blockchain offers a decentralized, secure, and transparent approach, enhancing the accuracy of risk assessments and preventing fraudulent claims. By maintaining an immutable ledger of historical claims, blockchain allows for better comparison of new claims against past data. Additionally, smart contracts within blockchain frameworks can automate claims processing, reducing administrative tasks and speeding up resolutions. Blockchain also enables decentralized, peer-to-peer insurance platforms, allowing multiple insurers to pool resources and share risks in a transparent, efficient manner. This study explores how blockchain can transform CLI, improving efficiency and security across the industry.
Blockchain Applications in Cyber Liability Insurance from IJCI JOURNAL
]]>
11 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci08-241009105653-2d903219-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
DAPLSR: Data Augmentation Partial Least Squares Regression Model via Manifold Optimization /slideshow/daplsr-data-augmentation-partial-least-squares-regression-model-via-manifold-optimization/272291640 13524ijci07-241009105619-ef0bc1af
Traditional Partial Least Squares Regression (PLSR) models frequently underperform when handling data characterized by uneven categories. To address the issue, this paper proposes a Data Augmentation Partial Least Squares Regression (DAPLSR) model via manifold optimization. The DAPLSR model introduces the Synthetic Minority Over-sampling Technique (SMOTE) to increase the number of samples and utilizes the Value Difference Metric (VDM) to select the nearest neighbor samples that closely resemble the original samples for generating synthetic samples. In solving the model, in order to obtain a more accurate numerical solution for PLSR, this paper proposes a manifold optimization method that uses the geometric properties of the constraint space to improve model degradation and optimization. Comprehensive experiments show that the proposed DAPLSR model achieves superior classification performance and outstanding evaluation metrics on various datasets, significantly outperforming existing methods. ]]>

Traditional Partial Least Squares Regression (PLSR) models frequently underperform when handling data characterized by uneven categories. To address the issue, this paper proposes a Data Augmentation Partial Least Squares Regression (DAPLSR) model via manifold optimization. The DAPLSR model introduces the Synthetic Minority Over-sampling Technique (SMOTE) to increase the number of samples and utilizes the Value Difference Metric (VDM) to select the nearest neighbor samples that closely resemble the original samples for generating synthetic samples. In solving the model, in order to obtain a more accurate numerical solution for PLSR, this paper proposes a manifold optimization method that uses the geometric properties of the constraint space to improve model degradation and optimization. Comprehensive experiments show that the proposed DAPLSR model achieves superior classification performance and outstanding evaluation metrics on various datasets, significantly outperforming existing methods. ]]>
Wed, 09 Oct 2024 10:56:19 GMT /slideshow/daplsr-data-augmentation-partial-least-squares-regression-model-via-manifold-optimization/272291640 ijci@slideshare.net(ijci) DAPLSR: Data Augmentation Partial Least Squares Regression Model via Manifold Optimization ijci Traditional Partial Least Squares Regression (PLSR) models frequently underperform when handling data characterized by uneven categories. To address the issue, this paper proposes a Data Augmentation Partial Least Squares Regression (DAPLSR) model via manifold optimization. The DAPLSR model introduces the Synthetic Minority Over-sampling Technique (SMOTE) to increase the number of samples and utilizes the Value Difference Metric (VDM) to select the nearest neighbor samples that closely resemble the original samples for generating synthetic samples. In solving the model, in order to obtain a more accurate numerical solution for PLSR, this paper proposes a manifold optimization method that uses the geometric properties of the constraint space to improve model degradation and optimization. Comprehensive experiments show that the proposed DAPLSR model achieves superior classification performance and outstanding evaluation metrics on various datasets, significantly outperforming existing methods. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci07-241009105619-ef0bc1af-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Traditional Partial Least Squares Regression (PLSR) models frequently underperform when handling data characterized by uneven categories. To address the issue, this paper proposes a Data Augmentation Partial Least Squares Regression (DAPLSR) model via manifold optimization. The DAPLSR model introduces the Synthetic Minority Over-sampling Technique (SMOTE) to increase the number of samples and utilizes the Value Difference Metric (VDM) to select the nearest neighbor samples that closely resemble the original samples for generating synthetic samples. In solving the model, in order to obtain a more accurate numerical solution for PLSR, this paper proposes a manifold optimization method that uses the geometric properties of the constraint space to improve model degradation and optimization. Comprehensive experiments show that the proposed DAPLSR model achieves superior classification performance and outstanding evaluation metrics on various datasets, significantly outperforming existing methods.
DAPLSR: Data Augmentation Partial Least Squares Regression Model via Manifold Optimization from IJCI JOURNAL
]]>
6 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci07-241009105619-ef0bc1af-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
WTCL-Dehaze: Rethinking Real-World Image Dehazing via Wavelet Transform and Contrastive Learning /slideshow/wtcl-dehaze-rethinking-real-world-image-dehazing-via-wavelet-transform-and-contrastive-learning/272291610 13524ijci06-241009105511-9a7353df
Images captured in hazy outdoor conditions often suffer from colour distortion, low contrast, and loss of detail, which impair high-level vision tasks. Single image dehazing is essential for applications such as autonomous driving and surveillance, with the aim of restoring image clarity. In this work, we propose WTCL-Dehaze an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform (DWT). We incorporate contrastive regularization to enhance feature representation by contrasting hazy and clear image pairs. Additionally, we utilize DWT for multi-scale feature extraction, effectively capturing high-frequency details and global structures. Our approach leverages both labelled and unlabelled data to mitigate the domain gap and improve generalization. The model is trained on a combination of synthetic and real-world datasets, ensuring robust performance across different scenarios. Extensive experiments demonstrate that our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods on both benchmark datasets and real-world images. ]]>

Images captured in hazy outdoor conditions often suffer from colour distortion, low contrast, and loss of detail, which impair high-level vision tasks. Single image dehazing is essential for applications such as autonomous driving and surveillance, with the aim of restoring image clarity. In this work, we propose WTCL-Dehaze an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform (DWT). We incorporate contrastive regularization to enhance feature representation by contrasting hazy and clear image pairs. Additionally, we utilize DWT for multi-scale feature extraction, effectively capturing high-frequency details and global structures. Our approach leverages both labelled and unlabelled data to mitigate the domain gap and improve generalization. The model is trained on a combination of synthetic and real-world datasets, ensuring robust performance across different scenarios. Extensive experiments demonstrate that our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods on both benchmark datasets and real-world images. ]]>
Wed, 09 Oct 2024 10:55:11 GMT /slideshow/wtcl-dehaze-rethinking-real-world-image-dehazing-via-wavelet-transform-and-contrastive-learning/272291610 ijci@slideshare.net(ijci) WTCL-Dehaze: Rethinking Real-World Image Dehazing via Wavelet Transform and Contrastive Learning ijci Images captured in hazy outdoor conditions often suffer from colour distortion, low contrast, and loss of detail, which impair high-level vision tasks. Single image dehazing is essential for applications such as autonomous driving and surveillance, with the aim of restoring image clarity. In this work, we propose WTCL-Dehaze an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform (DWT). We incorporate contrastive regularization to enhance feature representation by contrasting hazy and clear image pairs. Additionally, we utilize DWT for multi-scale feature extraction, effectively capturing high-frequency details and global structures. Our approach leverages both labelled and unlabelled data to mitigate the domain gap and improve generalization. The model is trained on a combination of synthetic and real-world datasets, ensuring robust performance across different scenarios. Extensive experiments demonstrate that our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods on both benchmark datasets and real-world images. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci06-241009105511-9a7353df-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Images captured in hazy outdoor conditions often suffer from colour distortion, low contrast, and loss of detail, which impair high-level vision tasks. Single image dehazing is essential for applications such as autonomous driving and surveillance, with the aim of restoring image clarity. In this work, we propose WTCL-Dehaze an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform (DWT). We incorporate contrastive regularization to enhance feature representation by contrasting hazy and clear image pairs. Additionally, we utilize DWT for multi-scale feature extraction, effectively capturing high-frequency details and global structures. Our approach leverages both labelled and unlabelled data to mitigate the domain gap and improve generalization. The model is trained on a combination of synthetic and real-world datasets, ensuring robust performance across different scenarios. Extensive experiments demonstrate that our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods on both benchmark datasets and real-world images.
WTCL-Dehaze: Rethinking Real-World Image Dehazing via Wavelet Transform and Contrastive Learning from IJCI JOURNAL
]]>
12 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci06-241009105511-9a7353df-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Block Medcare: Advancing Healthcare Through Blockchain Integration /slideshow/block-medcare-advancing-healthcare-through-blockchain-integration/272291582 13524ijci05-241009105405-6bc2d070
In an era driven by information exchange, transparency and security hold crucial importance, particularly within the healthcare industry, where data integrity and confidentiality are paramount. This paper investigates the integration of blockchain technology in healthcare, focusing on its potential to revolutionize Electronic Health Records (EHR) management and data sharing. By leveraging Ethereum-based blockchain implementations and smart contracts, we propose a novel system that empowers patients to securely store and manage their medical data. Our research addresses critical challenges in implementing blockchain in healthcare, including scalability, user privacy, and regulatory compliance. We propose a solution that combines digital signatures, Role-Based Access Control, and a multi-layered architecture to enhance security and ensure controlled access. The system's key functions, including user registration, data append, and data retrieval, are facilitated through smart contracts, providing a secure and efficient mechanism for managing health information. To validate our approach, we developed a decentralized application (dApp) that demonstrates the practical implementation of our blockchain-based healthcare solution. The dApp incorporates user-friendly interfaces for patients, doctors, and administrators, showcasing the system's potential to streamline healthcare processes while maintaining data security and integrity. Additionally, we conducted a survey to gain insights into the perceived benefits and challenges of blockchain adoption in healthcare. The results indicate strong interest among healthcare professionals and IT experts, while also highlighting concerns about integration costs and technological complexity. Our findings underscore the transformative potential of blockchain technology in healthcare, pointing towards a new era of patient-centric and secure healthcare services. By addressing current limitations and exploring future enhancements, such as integration with IoT devices and AI-driven analytics, this research contributes to the ongoing evolution of secure, efficient, and interoperable healthcare systems. ]]>

In an era driven by information exchange, transparency and security hold crucial importance, particularly within the healthcare industry, where data integrity and confidentiality are paramount. This paper investigates the integration of blockchain technology in healthcare, focusing on its potential to revolutionize Electronic Health Records (EHR) management and data sharing. By leveraging Ethereum-based blockchain implementations and smart contracts, we propose a novel system that empowers patients to securely store and manage their medical data. Our research addresses critical challenges in implementing blockchain in healthcare, including scalability, user privacy, and regulatory compliance. We propose a solution that combines digital signatures, Role-Based Access Control, and a multi-layered architecture to enhance security and ensure controlled access. The system's key functions, including user registration, data append, and data retrieval, are facilitated through smart contracts, providing a secure and efficient mechanism for managing health information. To validate our approach, we developed a decentralized application (dApp) that demonstrates the practical implementation of our blockchain-based healthcare solution. The dApp incorporates user-friendly interfaces for patients, doctors, and administrators, showcasing the system's potential to streamline healthcare processes while maintaining data security and integrity. Additionally, we conducted a survey to gain insights into the perceived benefits and challenges of blockchain adoption in healthcare. The results indicate strong interest among healthcare professionals and IT experts, while also highlighting concerns about integration costs and technological complexity. Our findings underscore the transformative potential of blockchain technology in healthcare, pointing towards a new era of patient-centric and secure healthcare services. By addressing current limitations and exploring future enhancements, such as integration with IoT devices and AI-driven analytics, this research contributes to the ongoing evolution of secure, efficient, and interoperable healthcare systems. ]]>
Wed, 09 Oct 2024 10:54:05 GMT /slideshow/block-medcare-advancing-healthcare-through-blockchain-integration/272291582 ijci@slideshare.net(ijci) Block Medcare: Advancing Healthcare Through Blockchain Integration ijci In an era driven by information exchange, transparency and security hold crucial importance, particularly within the healthcare industry, where data integrity and confidentiality are paramount. This paper investigates the integration of blockchain technology in healthcare, focusing on its potential to revolutionize Electronic Health Records (EHR) management and data sharing. By leveraging Ethereum-based blockchain implementations and smart contracts, we propose a novel system that empowers patients to securely store and manage their medical data. Our research addresses critical challenges in implementing blockchain in healthcare, including scalability, user privacy, and regulatory compliance. We propose a solution that combines digital signatures, Role-Based Access Control, and a multi-layered architecture to enhance security and ensure controlled access. The system's key functions, including user registration, data append, and data retrieval, are facilitated through smart contracts, providing a secure and efficient mechanism for managing health information. To validate our approach, we developed a decentralized application (dApp) that demonstrates the practical implementation of our blockchain-based healthcare solution. The dApp incorporates user-friendly interfaces for patients, doctors, and administrators, showcasing the system's potential to streamline healthcare processes while maintaining data security and integrity. Additionally, we conducted a survey to gain insights into the perceived benefits and challenges of blockchain adoption in healthcare. The results indicate strong interest among healthcare professionals and IT experts, while also highlighting concerns about integration costs and technological complexity. Our findings underscore the transformative potential of blockchain technology in healthcare, pointing towards a new era of patient-centric and secure healthcare services. By addressing current limitations and exploring future enhancements, such as integration with IoT devices and AI-driven analytics, this research contributes to the ongoing evolution of secure, efficient, and interoperable healthcare systems. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci05-241009105405-6bc2d070-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> In an era driven by information exchange, transparency and security hold crucial importance, particularly within the healthcare industry, where data integrity and confidentiality are paramount. This paper investigates the integration of blockchain technology in healthcare, focusing on its potential to revolutionize Electronic Health Records (EHR) management and data sharing. By leveraging Ethereum-based blockchain implementations and smart contracts, we propose a novel system that empowers patients to securely store and manage their medical data. Our research addresses critical challenges in implementing blockchain in healthcare, including scalability, user privacy, and regulatory compliance. We propose a solution that combines digital signatures, Role-Based Access Control, and a multi-layered architecture to enhance security and ensure controlled access. The system&#39;s key functions, including user registration, data append, and data retrieval, are facilitated through smart contracts, providing a secure and efficient mechanism for managing health information. To validate our approach, we developed a decentralized application (dApp) that demonstrates the practical implementation of our blockchain-based healthcare solution. The dApp incorporates user-friendly interfaces for patients, doctors, and administrators, showcasing the system&#39;s potential to streamline healthcare processes while maintaining data security and integrity. Additionally, we conducted a survey to gain insights into the perceived benefits and challenges of blockchain adoption in healthcare. The results indicate strong interest among healthcare professionals and IT experts, while also highlighting concerns about integration costs and technological complexity. Our findings underscore the transformative potential of blockchain technology in healthcare, pointing towards a new era of patient-centric and secure healthcare services. By addressing current limitations and exploring future enhancements, such as integration with IoT devices and AI-driven analytics, this research contributes to the ongoing evolution of secure, efficient, and interoperable healthcare systems.
Block Medcare: Advancing Healthcare Through Blockchain Integration from IJCI JOURNAL
]]>
27 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci05-241009105405-6bc2d070-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Analysis of Synchronization Mechanisms in Operating Systems /slideshow/analysis-of-synchronization-mechanisms-in-operating-systems/271970397 13524ijci041-240923124712-c97411a7
This research analyzed the performance and consistency of four synchronization mechanisms—reentrant locks, semaphores, synchronized methods, and synchronized blocks—across three operating systems: macOS, Windows, and Linux. Synchronization ensures that concurrent processes or threads access shared resources safely, and efficient synchronization is vital for maintaining system performance and reliability. The study aimed to identify the synchronization mechanism that balances efficiency, measured by execution time, and consistency, assessed by variance and standard deviation, across platforms.The initial hypothesis proposed that mutex-based mechanisms, specifically synchronized methods and blocks, would be the most efficient due to their simplicity. However, empirical results showed that reentrant locks had the lowest average execution time (14.67ms), making them the most efficient mechanism, but with the highest variability (standard deviation of 1.15). In contrast, synchronized methods, blocks, and semaphores exhibited higher average execution times (16.33ms for methods, 16.67ms for blocks), but with greater consistency (variance of 0.33).The findings indicated that while reentrant locks were faster, they were more platform-dependent, whereas mutex-based mechanisms provided more predictable performance across all operating systems. The use of virtual machines for Windows and Linux was a limitation, potentially affecting the results. Future research should include native testing and explore additional synchronization mechanisms and higher concurrency levels. These insights help developers and system designers optimize synchronization strategies for either performance or stability, depending on the application's requirements. ]]>

This research analyzed the performance and consistency of four synchronization mechanisms—reentrant locks, semaphores, synchronized methods, and synchronized blocks—across three operating systems: macOS, Windows, and Linux. Synchronization ensures that concurrent processes or threads access shared resources safely, and efficient synchronization is vital for maintaining system performance and reliability. The study aimed to identify the synchronization mechanism that balances efficiency, measured by execution time, and consistency, assessed by variance and standard deviation, across platforms.The initial hypothesis proposed that mutex-based mechanisms, specifically synchronized methods and blocks, would be the most efficient due to their simplicity. However, empirical results showed that reentrant locks had the lowest average execution time (14.67ms), making them the most efficient mechanism, but with the highest variability (standard deviation of 1.15). In contrast, synchronized methods, blocks, and semaphores exhibited higher average execution times (16.33ms for methods, 16.67ms for blocks), but with greater consistency (variance of 0.33).The findings indicated that while reentrant locks were faster, they were more platform-dependent, whereas mutex-based mechanisms provided more predictable performance across all operating systems. The use of virtual machines for Windows and Linux was a limitation, potentially affecting the results. Future research should include native testing and explore additional synchronization mechanisms and higher concurrency levels. These insights help developers and system designers optimize synchronization strategies for either performance or stability, depending on the application's requirements. ]]>
Mon, 23 Sep 2024 12:47:12 GMT /slideshow/analysis-of-synchronization-mechanisms-in-operating-systems/271970397 ijci@slideshare.net(ijci) Analysis of Synchronization Mechanisms in Operating Systems ijci This research analyzed the performance and consistency of four synchronization mechanisms—reentrant locks, semaphores, synchronized methods, and synchronized blocks—across three operating systems: macOS, Windows, and Linux. Synchronization ensures that concurrent processes or threads access shared resources safely, and efficient synchronization is vital for maintaining system performance and reliability. The study aimed to identify the synchronization mechanism that balances efficiency, measured by execution time, and consistency, assessed by variance and standard deviation, across platforms.The initial hypothesis proposed that mutex-based mechanisms, specifically synchronized methods and blocks, would be the most efficient due to their simplicity. However, empirical results showed that reentrant locks had the lowest average execution time (14.67ms), making them the most efficient mechanism, but with the highest variability (standard deviation of 1.15). In contrast, synchronized methods, blocks, and semaphores exhibited higher average execution times (16.33ms for methods, 16.67ms for blocks), but with greater consistency (variance of 0.33).The findings indicated that while reentrant locks were faster, they were more platform-dependent, whereas mutex-based mechanisms provided more predictable performance across all operating systems. The use of virtual machines for Windows and Linux was a limitation, potentially affecting the results. Future research should include native testing and explore additional synchronization mechanisms and higher concurrency levels. These insights help developers and system designers optimize synchronization strategies for either performance or stability, depending on the application's requirements. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci041-240923124712-c97411a7-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This research analyzed the performance and consistency of four synchronization mechanisms—reentrant locks, semaphores, synchronized methods, and synchronized blocks—across three operating systems: macOS, Windows, and Linux. Synchronization ensures that concurrent processes or threads access shared resources safely, and efficient synchronization is vital for maintaining system performance and reliability. The study aimed to identify the synchronization mechanism that balances efficiency, measured by execution time, and consistency, assessed by variance and standard deviation, across platforms.The initial hypothesis proposed that mutex-based mechanisms, specifically synchronized methods and blocks, would be the most efficient due to their simplicity. However, empirical results showed that reentrant locks had the lowest average execution time (14.67ms), making them the most efficient mechanism, but with the highest variability (standard deviation of 1.15). In contrast, synchronized methods, blocks, and semaphores exhibited higher average execution times (16.33ms for methods, 16.67ms for blocks), but with greater consistency (variance of 0.33).The findings indicated that while reentrant locks were faster, they were more platform-dependent, whereas mutex-based mechanisms provided more predictable performance across all operating systems. The use of virtual machines for Windows and Linux was a limitation, potentially affecting the results. Future research should include native testing and explore additional synchronization mechanisms and higher concurrency levels. These insights help developers and system designers optimize synchronization strategies for either performance or stability, depending on the application&#39;s requirements.
Analysis of Synchronization Mechanisms in Operating Systems from IJCI JOURNAL
]]>
157 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci041-240923124712-c97411a7-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Multi-classification of Cad Entities: Leveraging the Entity-as-Node Approach with Graph Neural Networks /slideshow/multi-classification-of-cad-entities-leveraging-the-entity-as-node-approach-with-graph-neural-networks/271970376 13524ijci03-240923124615-d94f08a2
The construction industry faces challenges in extracting and interpreting semantic information from CAD floor plans and related data. Graph Neural Networks (GNNs) have emerged as a potential solution, preserving the structural integrity of CAD drawings without rasterization. Accurate identification of structural symbols, such as walls, doors, and windows, is vital for generalizing floor plans. This paper investigates GNN methods to enhance the classification of these symbols in CAD floor plans, proposing an entity-as-node graph representation. We evaluate various preprocessing strategies and GNN architectures, including Graph Attention Networks (GAT), GATv2, Generalized Aggregation Networks (GANet), Principal Neighborhood Aggregation (PNA), and Unified Message Passing (UniMP) on the CubiCasa5K dataset. Our results show that these methods significantly outperform current state-of-the-art approaches, demonstrating their effectiveness in CAD floor plan entity classification. ]]>

The construction industry faces challenges in extracting and interpreting semantic information from CAD floor plans and related data. Graph Neural Networks (GNNs) have emerged as a potential solution, preserving the structural integrity of CAD drawings without rasterization. Accurate identification of structural symbols, such as walls, doors, and windows, is vital for generalizing floor plans. This paper investigates GNN methods to enhance the classification of these symbols in CAD floor plans, proposing an entity-as-node graph representation. We evaluate various preprocessing strategies and GNN architectures, including Graph Attention Networks (GAT), GATv2, Generalized Aggregation Networks (GANet), Principal Neighborhood Aggregation (PNA), and Unified Message Passing (UniMP) on the CubiCasa5K dataset. Our results show that these methods significantly outperform current state-of-the-art approaches, demonstrating their effectiveness in CAD floor plan entity classification. ]]>
Mon, 23 Sep 2024 12:46:15 GMT /slideshow/multi-classification-of-cad-entities-leveraging-the-entity-as-node-approach-with-graph-neural-networks/271970376 ijci@slideshare.net(ijci) Multi-classification of Cad Entities: Leveraging the Entity-as-Node Approach with Graph Neural Networks ijci The construction industry faces challenges in extracting and interpreting semantic information from CAD floor plans and related data. Graph Neural Networks (GNNs) have emerged as a potential solution, preserving the structural integrity of CAD drawings without rasterization. Accurate identification of structural symbols, such as walls, doors, and windows, is vital for generalizing floor plans. This paper investigates GNN methods to enhance the classification of these symbols in CAD floor plans, proposing an entity-as-node graph representation. We evaluate various preprocessing strategies and GNN architectures, including Graph Attention Networks (GAT), GATv2, Generalized Aggregation Networks (GANet), Principal Neighborhood Aggregation (PNA), and Unified Message Passing (UniMP) on the CubiCasa5K dataset. Our results show that these methods significantly outperform current state-of-the-art approaches, demonstrating their effectiveness in CAD floor plan entity classification. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci03-240923124615-d94f08a2-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> The construction industry faces challenges in extracting and interpreting semantic information from CAD floor plans and related data. Graph Neural Networks (GNNs) have emerged as a potential solution, preserving the structural integrity of CAD drawings without rasterization. Accurate identification of structural symbols, such as walls, doors, and windows, is vital for generalizing floor plans. This paper investigates GNN methods to enhance the classification of these symbols in CAD floor plans, proposing an entity-as-node graph representation. We evaluate various preprocessing strategies and GNN architectures, including Graph Attention Networks (GAT), GATv2, Generalized Aggregation Networks (GANet), Principal Neighborhood Aggregation (PNA), and Unified Message Passing (UniMP) on the CubiCasa5K dataset. Our results show that these methods significantly outperform current state-of-the-art approaches, demonstrating their effectiveness in CAD floor plan entity classification.
Multi-classification of Cad Entities: Leveraging the Entity-as-Node Approach with Graph Neural Networks from IJCI JOURNAL
]]>
13 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci03-240923124615-d94f08a2-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Leveraging Generative AI for On-Demand Tutoring as a New Paradigm in Education /slideshow/leveraging-generative-ai-for-on-demand-tutoring-as-a-new-paradigm-in-education/271970358 13524ijci02-240923124524-60b53c4c
Traditional education often fails to provide personalized, immediate support to all students, leading to gaps in understanding and learning inequality. Generative Artificial Intelligence (GenAI) offers a scalable, cost-effective solution for on-demand tutoring, providing personalized, 24/7 support. This paper explores the application of GenAI as an on-demand tutoring system, addressing the critical need for personalized, immediate educational support. Using GenAI to create an on-demand tutoring system that offers personalized, real-time student support is vital to today’s academic needs. Crucial components of this approach include advanced natural language processing to understand and respond to student queries, machine learning algorithms to adapt to individual learning styles, and a scalable cloud-based infrastructure to ensure 24/7 availability. This approach’s expected scientific surplus value lies in its potential to significantly enhance educational outcomes by providing scalable, personalized learning experiences. This paper outlines a pathway for future research and development in this area, highlighting the potential of GenAI to revolutionize education and improve learning outcomes for all students. ]]>

Traditional education often fails to provide personalized, immediate support to all students, leading to gaps in understanding and learning inequality. Generative Artificial Intelligence (GenAI) offers a scalable, cost-effective solution for on-demand tutoring, providing personalized, 24/7 support. This paper explores the application of GenAI as an on-demand tutoring system, addressing the critical need for personalized, immediate educational support. Using GenAI to create an on-demand tutoring system that offers personalized, real-time student support is vital to today’s academic needs. Crucial components of this approach include advanced natural language processing to understand and respond to student queries, machine learning algorithms to adapt to individual learning styles, and a scalable cloud-based infrastructure to ensure 24/7 availability. This approach’s expected scientific surplus value lies in its potential to significantly enhance educational outcomes by providing scalable, personalized learning experiences. This paper outlines a pathway for future research and development in this area, highlighting the potential of GenAI to revolutionize education and improve learning outcomes for all students. ]]>
Mon, 23 Sep 2024 12:45:24 GMT /slideshow/leveraging-generative-ai-for-on-demand-tutoring-as-a-new-paradigm-in-education/271970358 ijci@slideshare.net(ijci) Leveraging Generative AI for On-Demand Tutoring as a New Paradigm in Education ijci Traditional education often fails to provide personalized, immediate support to all students, leading to gaps in understanding and learning inequality. Generative Artificial Intelligence (GenAI) offers a scalable, cost-effective solution for on-demand tutoring, providing personalized, 24/7 support. This paper explores the application of GenAI as an on-demand tutoring system, addressing the critical need for personalized, immediate educational support. Using GenAI to create an on-demand tutoring system that offers personalized, real-time student support is vital to today’s academic needs. Crucial components of this approach include advanced natural language processing to understand and respond to student queries, machine learning algorithms to adapt to individual learning styles, and a scalable cloud-based infrastructure to ensure 24/7 availability. This approach’s expected scientific surplus value lies in its potential to significantly enhance educational outcomes by providing scalable, personalized learning experiences. This paper outlines a pathway for future research and development in this area, highlighting the potential of GenAI to revolutionize education and improve learning outcomes for all students. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci02-240923124524-60b53c4c-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Traditional education often fails to provide personalized, immediate support to all students, leading to gaps in understanding and learning inequality. Generative Artificial Intelligence (GenAI) offers a scalable, cost-effective solution for on-demand tutoring, providing personalized, 24/7 support. This paper explores the application of GenAI as an on-demand tutoring system, addressing the critical need for personalized, immediate educational support. Using GenAI to create an on-demand tutoring system that offers personalized, real-time student support is vital to today’s academic needs. Crucial components of this approach include advanced natural language processing to understand and respond to student queries, machine learning algorithms to adapt to individual learning styles, and a scalable cloud-based infrastructure to ensure 24/7 availability. This approach’s expected scientific surplus value lies in its potential to significantly enhance educational outcomes by providing scalable, personalized learning experiences. This paper outlines a pathway for future research and development in this area, highlighting the potential of GenAI to revolutionize education and improve learning outcomes for all students.
Leveraging Generative AI for On-Demand Tutoring as a New Paradigm in Education from IJCI JOURNAL
]]>
19 0 https://cdn.slidesharecdn.com/ss_thumbnails/13524ijci02-240923124524-60b53c4c-thumbnail.jpg?width=120&height=120&fit=bounds document Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
https://cdn.slidesharecdn.com/profile-photo-ijci-48x48.jpg?cb=1736338819 International Journal on Cybernetics & Informatics ( IJCI) is an open access peer- reviewed journal that focuses on the areas related to cybernetics which is information, control and system theory, understands the design and function of any system and the relationship among these applications. This journal aims to provide a platform for exchanging ideas in new emerging trends that needs more focus and exposure and will attempt to publish proposals that strengthen our goals. airccse.org/journal/ijci/index.html https://cdn.slidesharecdn.com/ss_thumbnails/mlcl2025cfp-250102103201-d5cbbe4e-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/6th-international-conference-on-machine-learning-and-cloud-computing-mlcl-2025/274582528 6th International Conf... https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci07-241218135032-89b954d4-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/actionable-pattern-discovery-for-emotion-detection-in-bigdata-in-education-and-business/274185840 Actionable Pattern Dis... https://cdn.slidesharecdn.com/ss_thumbnails/13624ijci06-241218134938-52a100a1-thumbnail.jpg?width=320&height=320&fit=bounds slideshow/proposal-of-a-data-model-for-a-dynamic-adaptation-of-resources-in-iots-adr-iot/274185826 Proposal of a Data Mod...