際際滷Share feed for slideshows by user LocNguyen38, last updated Wed, 09 Oct 2024 11:25:55 GMT.

Recover and Heal - ICEPD Conference 2024 - Batam - Indonesia (posted Wed, 09 Oct 2024 11:25:55 GMT)
/slideshow/recover-and-heal-icepd-conference-2024-batam-indonesia/272292362
The 11th International Multidisciplinary Research Conference in Education, Tourism, Environmental Science and Technology, with the theme "Leveraging Sustainable Climate Through Education".

Digital Transformation and Governance - SSS2024 (posted Fri, 30 Aug 2024 02:38:34 GMT)
/slideshow/digital-transformation-and-governance-sss2024/271423352
Digital transformation is very important in modern society and government because it connects infrastructure and superstructure. The Vietnamese government is doing this work well.

Tutorial on deep transformer (presentation slides) (posted Tue, 06 Aug 2024 09:22:36 GMT)
/slideshow/tutorial-on-deep-transformer-presentation-slides-ad59/270802572
The development of the transformer is a great step forward in the long journeys of both generative artificial intelligence (GenAI) and statistical machine translation (SMT) with the support of deep neural networks (DNN); indeed, SMT can be seen as an interesting outcome of GenAI because of the encoder-decoder mechanism for sequence generation built into the transformer. But why is the transformer preeminent in GenAI and SMT? Firstly, the transformer has a so-called self-attention mechanism that discovers the contextual meaning of every token in a sequence, which helps reduce ambiguity. Secondly, the transformer does not depend on the ordering of tokens in a sequence, which allows it to be trained on many parts of sequences in parallel. Thirdly, as a consequence of the two previous points, the transformer can be trained on a large corpus with high accuracy as well as high computational performance. Moreover, the transformer is implemented with DNNs, one of the most important and effective approaches in artificial intelligence (AI) in recent times. Although the transformer is preeminent because of this coherent design, it is not easy to understand. Therefore, this technical report aims to describe the transformer with explanations that are as easy to understand as possible.

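Since the abstract singles out self-attention as the mechanism that computes contextual meaning for every token at once, here is a minimal NumPy sketch of scaled dot-product self-attention; the sequence length, embedding size, and random projection matrices are illustrative assumptions, not taken from the slides.

```python
# Minimal sketch of scaled dot-product self-attention, assuming
# illustrative sizes (5 tokens, 16-dim embeddings) and random projections.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
X = rng.normal(size=(seq_len, d_model))          # token embeddings

# Learned projections (random here) map tokens to queries, keys, values
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Attention weights: how strongly each token attends to every other token
scores = Q @ K.T / np.sqrt(d_model)              # (seq_len, seq_len)
weights = softmax(scores)
context = weights @ V                            # contextualized representations
print(weights.round(2))                          # each row sums to 1
```

Every row of weights is computed in one matrix product, so nothing depends on processing tokens one by one; that position-independence is exactly what permits the parallel training mentioned above.
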
Nam Ton C畉u: k畛 v畛ng v hi畛n th畛c [The Global South: expectations and reality] (slides) (posted Tue, 06 Aug 2024 09:12:25 GMT)
/slideshow/nam-toan-c-u-k-v-ng-va-hi-n-th-c-slides/270802319
The Global South and the Global North: this is not a geographical division, and of course not a north-south split along the equator, like a boundary cutting a round world in two between matter and spirit, between nature and society, between commerce and production. Rather, it is close to an illusion that divides the world between the West and the non-West, between rich and poor, with the expectation of reaching a balance of power as the South begins to raise its voice with the weight of its population, its resources and, above all, its aspiration. Once this illusion is pushed by aspiration, supported by spreading intellectual resources, and encouraged by the decline of the West's controlling power amid complex political developments interwoven with conflict, it will gradually become reality, approaching an equilibrium point, now that the process of globalization and its thesis of liberalism has been blocked in recent years by a protectionism born of crisis. This step backward is like a squeezed ball taking shape as the Global South / Global North trend, that is, the realization of the Global South / Global North illusion. Perhaps the activity of the Global South begins with trade, finance and diplomacy in order to draw in technological and political power, all of which ultimately comes down to interests, but it creates a simulated image of an upper house (the Global North) and a lower house (the Global South). However, I do not think the Global South forms a pole; rather, it is a movement, a stage on which the great powers try to form poles and other countries jostle to pursue their legitimate interests.

Tutorial on deep generative model (slides) (posted Tue, 06 Aug 2024 09:08:24 GMT)
/slideshow/tutorial-on-deep-generative-model-slides/270802212
Artificial intelligence (AI) is a current trend in computer science which extends its amazing capacities to other technologies such as mechatronics and robotics. Going beyond technological applications, the philosophy behind AI is that there is a vague, potential convergence of artificial manufacture and the natural world, although the limit of this approach may still be very far away; but why? The implicit problem is that Darwin's theory of evolution focuses on the natural world, where breeding conservation is the cornerstone of the existence of the living world, but there is no similar concept of breeding conservation in the artificial world, whose things are created by humans. However, after a long period of development, AI now offers an interesting concept of generation, in which artifacts created by computer science can derive new generations that inherit their aspects and characteristics. Such generated artifacts make us look back on offspring produced by breeding conservation in the natural world. Therefore, it is possible to regard AI generation, a recent subject of AI, as a significant development in computer science as well as the high-tech domain. AI generation does not bring us close to biological evolution, even if AI can be combined with biological technology, but it can extend our viewpoint on Darwin's theory of evolution, and there may exist some uncertain relationship between the man-made world and the natural world. In any case, AI generation is an important current subject in AI, and there are two main generative models in computer science: 1) generative models that apply large language models to generating natural-language texts understandable by humans, and 2) generative models that apply deep neural networks to generating digital content such as sound, images, and video. This technical report focuses on deep generative models (DGM) for digital content generation and is a short summary of approaches to implementing DGMs. Researchers can read this work as an introduction to DGM with easily understandable explanations.

Inspirational message: Artificial general intelligence (posted Mon, 05 Aug 2024 06:53:46 GMT)
/slideshow/inspirational-message-artificial-general-intelligence/270767893
Artificial general intelligence. International Interdisciplinary Research Conference (IIRC) 2024, EduHeart Book Publishing & Training and Development Services, 4 August 2024, Philippines.

Adversarial Variational Autoencoders to extend and improve generative model - ICBDC2024 (posted Sat, 01 Jun 2024 11:12:12 GMT)
/slideshow/adversarial-variational-autoencoders-to-extend-and-improve-generative-model-icbdc2024/269453660
Generative artificial intelligence (GenAI) has been developing with many incredible achievements such as ChatGPT and Bard. The deep generative model (DGM) is a branch of GenAI that is preeminent in generating raster data such as images and sound, owing to the strong points of deep neural networks (DNN) in inference and recognition. The built-in inference mechanism of a DNN, which simulates and aspires to the synaptic plasticity of the human neural network, fosters the generation ability of DGMs, which produce surprising results with the support of statistical flexibility. Two popular approaches to DGM are the Variational Autoencoder (VAE) and the Generative Adversarial Network (GAN). Both VAE and GAN have their own strong points, although they share the same underlying statistical theory as well as an incredible complexity in the hidden layers of the DNN, where the DNN becomes an effective encoding/decoding function without a concrete specification. In this research, I try to unify VAE and GAN into a consistent and consolidated model called the Adversarial Variational Autoencoder (AVA), in which VAE and GAN complement each other: the VAE is a good data generator that encodes data via the excellent ideology of Kullback-Leibler divergence, and the GAN is a significantly important method for assessing the reliability of data, that is, whether it is realistic or fake. In other words, AVA aims to improve the accuracy of generative models, and it also extends the function of simple generative models. In its methodology, this research focuses on combining applied mathematical concepts with skillful techniques of computer programming in order to implement and solve complicated problems as simply as possible.

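As a concrete illustration of how VAE and GAN terms can complement each other in one model, here is a hedged PyTorch sketch of a single AVA-style training step; the layer sizes, the equal weighting of the three generator losses, and the optimizer settings are my own assumptions for the sketch, not the paper's specification.

```python
# Sketch of one AVA-style training step: a VAE whose reconstruction is also
# scored by a GAN discriminator. Inputs x are assumed normalized to [0, 1].
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 784
encoder = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_vae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(x):
    # VAE part: reconstruction + Kullback-Leibler divergence (ELBO terms)
    stats = encoder(x)
    mu, log_var = stats[:, :latent_dim], stats[:, latent_dim:]
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)    # reparameterization
    x_rec = decoder(z)
    rec_loss = bce(x_rec, x)
    kl_loss = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    # GAN part: the generator wants the discriminator to call x_rec "real"
    adv_loss = bce(discriminator(x_rec), torch.ones(x.size(0), 1))
    opt_vae.zero_grad()
    (rec_loss + kl_loss + adv_loss).backward()
    opt_vae.step()
    # Discriminator update: real data vs. detached reconstructions
    d_loss = bce(discriminator(x), torch.ones(x.size(0), 1)) + \
             bce(discriminator(x_rec.detach()), torch.zeros(x.size(0), 1))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

train_step(torch.rand(32, data_dim))   # one step on a random toy batch
```
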
Sharing some thoughts of ASEAN relations (posted Fri, 16 Feb 2024 03:21:42 GMT)
/slideshow/sharing-some-thoughts-of-asean-relations/266331866
The presentation shares some thoughts about ASEAN relations.

Conditional mixture model and its application for regression model (posted Fri, 19 Jan 2024 08:18:53 GMT)
/slideshow/conditional-mixture-model-and-its-application-for-regression-model/265552130
The expectation maximization (EM) algorithm is a powerful mathematical tool for estimating statistical parameters when a data sample contains a hidden part and an observed part. EM is applied to learn a finite mixture model in which the whole distribution of the observed variable is a weighted sum of partial distributions. The coverage ratio of every partial distribution is specified by the probability of the hidden variable. An application of the mixture model is soft clustering, in which a cluster is modeled by the hidden variable: each data point can be assigned to more than one cluster, and the degree of such assignment is represented by the probability of the hidden variable. However, this probability in the traditional mixture model is simplified as a parameter, which can cause a loss of valuable information. Therefore, in this research I propose a so-called conditional mixture model (CMM) in which the probability of the hidden variable is modeled as a full probability density function (PDF) that owns an individual parameter. CMM aims to extend the mixture model. I also propose an application of CMM called the adaptive regression model (ARM). A traditional regression model is effective when the data sample is scattered evenly. If data points are grouped into clusters, a regression model tries to learn a unified regression function that goes through all data points. Obviously, such a unified function is not effective for evaluating the response variable based on grouped data points. The term "adaptive" in ARM means that ARM solves this ineffectiveness problem by first selecting the best cluster of data points and then evaluating the response variable within that best cluster. In other words, ARM reduces the estimation space of the regression model so as to gain high accuracy in calculation. Keywords: expectation maximization (EM) algorithm, finite mixture model, conditional mixture model, regression model, adaptive regression model (ARM).

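To ground the EM and mixture-model vocabulary above, here is a minimal NumPy sketch of EM for a one-dimensional Gaussian mixture, the baseline in which the hidden-variable probabilities are plain parameters (the simplification CMM is said to remove); the component count, iteration count, and demo data are illustrative.

```python
# EM for a 1-D Gaussian mixture: mixing weights pi are plain parameters.
import numpy as np

def em_gmm(x, k=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    pi = np.full(k, 1.0 / k)                 # P(hidden = j), a plain parameter
    mu = rng.choice(x, size=k)               # component means
    var = np.full(k, np.var(x))              # component variances
    for _ in range(iters):
        # E-step: responsibility of component j for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        nk = resp.sum(axis=0)
        pi, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

x = np.concatenate([np.random.normal(-3, 1, 200), np.random.normal(4, 1, 300)])
print(em_gmm(x))   # recovers weights near (0.4, 0.6) and means near (-3, 4)
```
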
Ngh畛ch d但n ch畛 lu畉n [A contrarian essay on democracy] (an overview of democracy and political institutions in relation to philosophy and religion) (posted Sun, 07 Jan 2024 09:53:08 GMT)
/LocNguyen38/nghch-dn-ch-lun-tng-quan-v-dn-ch-v-th-ch-chnh-tr-lin-quan-n-trit-hc-v-tn-gio-6b71
The universe has matter and antimatter; society has conflict and harmony, so that it develops and declines, then declines and develops. I rely on this to justify an article of a contrarian nature that runs against the current of the times, yet readers will find for themselves the inseparable meaning of the social forms. Moreover, this article does not go deeply into legal research; it only offers an overview of democracy and political institutions in relation to philosophy and religion, and its contribution is the concept of a "sheltered" judiciary (n動董ng t畉m), one that does not truly arise from election and does not truly arise from appointment either.

A Novel Collaborative Filtering Algorithm by Bit Mining Frequent Itemsets (posted Sat, 09 Dec 2023 08:32:47 GMT)
/slideshow/a-novel-collaborative-filtering-algorithm-by-bit-mining-frequent-itemsets/264474331
Collaborative filtering (CF) is a popular technique in recommendation research. Concretely, the items recommended to a user are determined by surveying her/his communities. There are two main CF approaches: memory-based and model-based. I propose a new model-based CF algorithm that mines frequent itemsets from the rating database, so that items belonging to frequent itemsets are recommended to the user. My CF algorithm gives an immediate response because the mining task is performed in an offline process mode. I also propose a so-called Roller algorithm for improving the process of mining frequent itemsets. The Roller algorithm is built on the heuristic assumption that the larger the support of an item is, the more likely it is that this item occurs in some frequent itemset. It is modeled on the whitewashing task of rolling a roller over a wall, in such a way that it is capable of picking up frequent itemsets. Moreover, I provide enhanced techniques such as bit representation, bit matching, and bit mining in order to speed up the recommendation process. These techniques take advantage of bitwise operations (AND, NOT) so as to reduce storage space and make the algorithms run faster.

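The bit representation is the most concrete part of the abstract, so here is a small hedged Python sketch of it: each item is stored as a bit vector over users, and the support of an itemset is the population count of the AND of its members' vectors; the toy rating data and item names are invented for illustration.

```python
# Bit representation: bit u of an item's vector is set iff user u rated it;
# itemset support = popcount of the AND of the members' bit vectors.
ratings = {            # item -> users who rated it
    "A": {0, 1, 2, 3},
    "B": {0, 1, 3},
    "C": {1, 2},
}
n_users = 4

def to_bits(users):
    bits = 0
    for u in users:
        bits |= 1 << u                     # set that user's bit
    return bits

item_bits = {item: to_bits(users) for item, users in ratings.items()}

def support(itemset):
    bits = (1 << n_users) - 1              # start with all users
    for item in itemset:
        bits &= item_bits[item]            # bitwise AND narrows to co-raters
    return bin(bits).count("1")            # popcount = supporting users

print(support({"A", "B"}))                 # 3: users 0, 1, 3 rated both
print(support({"A", "B", "C"}) >= 2)       # False: only user 1 rated all three
```
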
Simple image deconvolution based on reverse image convolution and backpropagation algorithm (posted Mon, 13 Nov 2023 07:03:55 GMT)
/slideshow/simple-image-deconvolution-based-on-reverse-image-convolution-and-backpropagation-algorithm/263344306
The deconvolution task is not important in a convolutional neural network (CNN), because it is not imperative to recover the convolved image when the convolutional layer's purpose is to extract features. However, deconvolution is useful in some cases, for inspecting and reflecting a convolutional filter as well as for trying to improve a generated image when the information loss is not serious, given the trade-off between information loss and specific features such as edge detection and sharpening. This research proposes a duplicated and reversed process for recovering a filtered image. Firstly, the source layer and the target layer are reversed, relative to traditional image convolution, so as to train the convolutional filter. Secondly, the trained filter is reversed again to derive a deconvolution operator that recovers the filtered image. The reversal process is combined with the backpropagation algorithm, which is the most popular method for learning neural networks. Experimental results show that the proposed technique is better at learning filters that focus on discovering pixel differences. Therefore, the main contribution of this research is a way to inspect convolutional filters from data.

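To make the reverse-training step concrete, the following is a minimal sketch (my own illustration under assumptions, not the author's implementation): given the original image x and its filtered version y, a small kernel k is fitted by gradient descent so that convolving y with k approximates x, i.e., a deconvolutional operator learned by backpropagation through a single linear convolution. The 3x3 kernel size, learning rate, iteration count, and identity initialization are all assumed choices.

```python
# Hypothetical sketch: learn a deconvolution kernel k with conv(y, k) ~ x
# by plain gradient descent; x and y are same-sized grayscale arrays in [0, 1].
import numpy as np

def conv2d(img, k):
    """'Same' 2D convolution (cross-correlation) with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

def learn_deconv_kernel(y, x, size=3, lr=1e-3, epochs=200):
    """Fit k minimizing ||conv(y, k) - x||^2; backpropagation reduces to
    the explicit gradient of this single linear layer."""
    k = np.zeros((size, size))
    k[size // 2, size // 2] = 1.0          # identity kernel as starting point
    ph = size // 2
    y_pad = np.pad(y, ph)
    h, w = x.shape
    for _ in range(epochs):
        err = conv2d(y, k) - x             # residual image
        grad = np.zeros_like(k)
        for u in range(size):              # dL/dk[u, v]
            for v in range(size):
                grad[u, v] = 2.0 * np.sum(err * y_pad[u:u + h, v:v + w])
        k -= lr * grad / err.size          # normalized gradient step
    return k
```

For instance, blurring a test image with a 3x3 box filter and then calling learn_deconv_kernel(blurred, original) should recover a kernel that approximately undoes the blur.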
Technological Accessibility: Learning Platform Among Senior High School Students | Thu, 02 Nov 2023 | /slideshow/technological-accessibility-learning-platform-among-senior-high-school-students/262984522
Engineering for Social Impact | Thu, 02 Nov 2023 | /slideshow/engineering-for-social-impact/262984500
Harnessing Technology for Research Education | Thu, 02 Nov 2023 | /slideshow/harnessing-technology-for-research-education/262984484
Future of education with support of technology | Thu, 02 Nov 2023 | /slideshow/future-of-education-with-support-of-technology/262984427
Where the dragon to fly | Thu, 02 Nov 2023 | /slideshow/where-the-dragon-to-fly/262984388
Adversarial Variational Autoencoders to extend and improve generative model | Thu, 14 Sep 2023 | /slideshow/adversarial-variational-autoencoders-to-extend-and-improve-generative-model/260927144
Generative artificial intelligence (GenAI) has been developing rapidly, with remarkable achievements such as ChatGPT and Bard. The deep generative model (DGM) is a branch of GenAI that is preeminent at generating raster data such as images and sound, owing to the strengths of deep neural networks (DNNs) in inference and recognition. The inference mechanism built into a DNN, which simulates the synaptic plasticity of the human neural network, fosters the generation ability of a DGM, producing surprising results with the support of statistical flexibility. Two popular DGM approaches are the Variational Autoencoder (VAE) and the Generative Adversarial Network (GAN). Both have their own strengths, although they share the same underlying statistical theory as well as considerable complexity hidden in the layers of a DNN, which acts as an effective encoding/decoding function without a concrete specification. In this research, I unify VAE and GAN into a consistent, consolidated model called the Adversarial Variational Autoencoder (AVA), in which VAE and GAN complement each other: the VAE is a good generator because it encodes data via the excellent ideology of Kullback-Leibler divergence, while the GAN provides an important method for assessing whether data is realistic or fake. In other words, AVA aims to improve the accuracy of generative models while also extending the functionality of simple ones. Methodologically, this research combines applied mathematical concepts with careful programming techniques in order to implement and solve complicated problems as simply as possible.
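As a concrete illustration of how the two parts can complement each other, here is a minimal sketch (an assumed toy architecture in PyTorch, not the author's AVA implementation): a VAE whose decoder also plays the role of the GAN generator, plus a discriminator that judges whether a reconstruction is realistic or fake; the generator objective adds an adversarial term to the usual reconstruction and Kullback-Leibler terms. Layer sizes and the flattened-image input are assumptions.

```python
# Hypothetical AVA-style sketch: VAE losses plus a GAN discriminator term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVA(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU())
        self.mu = nn.Linear(h, z_dim)
        self.logvar = nn.Linear(h, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim), nn.Sigmoid())
        self.disc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(),
                                  nn.Linear(h, 1))   # real/fake logit

    def forward(self, x):
        hid = self.enc(x)
        mu, logvar = self.mu(hid), self.logvar(hid)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def ava_losses(model, x):
    x_hat, mu, logvar = model(x)
    # VAE part: reconstruction plus Kullback-Leibler divergence to N(0, I).
    rec = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # GAN part: the discriminator assesses reliability (realistic vs. fake).
    d_real, d_fake = model.disc(x), model.disc(x_hat.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    # Generator tries to make reconstructions the discriminator accepts.
    g_adv = F.binary_cross_entropy_with_logits(model.disc(x_hat),
                                               torch.ones_like(d_real))
    return rec + kl + g_adv, d_loss
```

In training one would alternate two optimizers: update the encoder/decoder on the first returned loss and the discriminator on the second.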
Learning dyadic data and predicting unaccomplished co-occurrent values by mixture model | Wed, 16 Aug 2023 | /slideshow/learning-dyadic-data-and-predicting-unaccomplished-cooccurrent-values-by-mixture-model/259907636
Dyadic data, also called co-occurrence data (COD), contains co-occurrences of objects, and statistical models to represent such data are needed. Fortunately, the finite mixture model is a solid statistical model for learning and making inference on dyadic data, because a mixture model is built smoothly and reliably by the expectation maximization (EM) algorithm, which suits the inherent sparseness of dyadic data. This research summarizes mixture models for dyadic data. When each co-occurrence is associated with a value, many values remain unaccomplished (missing) because a lot of co-occurrences do not exist. In this research, these unaccomplished values are estimated as the mean (expectation) of the random variable given the partial probabilistic distributions inside the dyadic mixture model.
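To illustrate the final estimation step, here is a minimal sketch (hypothetical interface and a simplified mixture, not the paper's exact model): assume a dyadic mixture has already been fitted by EM, giving component weights, component-conditional row and column probabilities, and a per-component mean value; a missing co-occurrence value for pair (i, j) is then the posterior-weighted expectation over components.

```python
# Hypothetical sketch: estimate a missing co-occurrence value as the
# expectation under a fitted dyadic mixture model with K components.
import numpy as np

def predict_missing(i, j, weights, p_row, p_col, mean_val):
    """E[value | i, j] = sum_k P(k | i, j) * mean_val[k], where
    weights[k] = P(k), p_row[k, i] = P(i | k), p_col[k, j] = P(j | k)."""
    joint = weights * p_row[:, i] * p_col[:, j]   # P(k) P(i|k) P(j|k)
    posterior = joint / joint.sum()               # P(k | i, j)
    return float(posterior @ mean_val)
```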
Tutorial on Bayesian optimization | Thu, 25 May 2023 | /slideshow/tutorial-on-bayesian-optimization/258013924
Machine learning forks into three main branches: supervised learning, unsupervised learning, and reinforcement learning, of which reinforcement learning holds great potential for artificial intelligence (AI) applications because it solves real problems through a progressive process in which candidate solutions are improved and fine-tuned continuously. This progressive approach, which reflects an ability to adapt, suits the real world, where most events occur and change continuously and unexpectedly. Moreover, data is becoming too large for supervised and unsupervised learning to draw valuable knowledge from at one time. Bayesian optimization (BO) models an optimization problem in a probabilistic form called a surrogate model and then directly maximizes an acquisition function created from that surrogate model, thereby implicitly and indirectly maximizing the target function to find a solution to the optimization problem. A popular surrogate model is the Gaussian process regression model. Maximizing the acquisition function is based on repeatedly updating the posterior probability of the surrogate model, which improves after every iteration. Taking advantage of an acquisition (utility) function is also common in decision theory, but the semantic meaning behind BO is that it solves problems progressively and adaptively by updating the surrogate model from a small piece of data at each step, in keeping with the ideology of reinforcement learning. BO is thus a reinforcement learning algorithm with many potential applications, and it is surveyed in this research with attention to its mathematical ideas. Moreover, the solution of optimization problems matters not only to applied mathematics but also to AI.
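The loop just described can be sketched compactly (a simplified illustration with an assumed RBF kernel, a candidate grid instead of a proper inner optimizer, and fixed noise, not the tutorial's exact code): fit a Gaussian process surrogate to the observations, compute expected improvement over candidates, query the maximizer, and repeat.

```python
# Hypothetical sketch of one Bayesian optimization step with a GP surrogate
# and the expected-improvement acquisition, for 1-D maximization of f.
import numpy as np
from scipy.stats import norm

def rbf(a, b, length=0.5):
    """RBF kernel matrix between 1-D point arrays a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP regression posterior mean and standard deviation at x_test."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    var = rbf(x_test, x_test).diagonal() - np.einsum('ij,ik,kj->j', K_s, K_inv, K_s)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, y_best):
    """EI acquisition for maximizing the target function."""
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_step(f, x_train, y_train, candidates):
    """Pick the next query point by maximizing EI, then evaluate f there."""
    mu, sigma = gp_posterior(x_train, y_train, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_train.max()))]
    return np.append(x_train, x_next), np.append(y_train, f(x_next))
```

Each call to bo_step appends one evaluation, trading off exploitation (high posterior mean) against exploration (high posterior uncertainty).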
Creative man is The Creator: this is my favorite statement. I am Loc Nguyen, a scientist, a poet, and a businessman; especially, I am very attractive and enthusiastic. Ultimately, I am the creative man. I believe that our wealth originates from creativity and that our civilization is built upon it. We create our world by ourselves. Therefore, I am very happy and honored to introduce myself and to share with you my creative works in science, technology, and art. www.locnguyen.net