際際滷shows by 際際滷Share user KhangPham3 (feed updated Mon, 16 Dec 2019 02:52:35 GMT)

BERT
Published Mon, 16 Dec 2019 02:52:35 GMT by Khang Pham
/slideshow/bert-206161415/206161415
A short introduction to BERT and some related papers.

NASNet: where models learn to generate models
Published Sat, 17 Aug 2019 14:22:36 GMT by Khang Pham
/slideshow/nas-net-where-model-learn-to-generate-models/164469112
A walkthrough of NASNet and a few papers that applied the NAS search space, covering the architecture-search approach used to reach state-of-the-art accuracy on ImageNet.

Notes on attention mechanism
Published Sat, 15 Dec 2018 17:16:42 GMT by Khang Pham
/slideshow/notes-on-attention-mechanism/125958170
A brief introduction to the attention mechanism and its application in neural machine translation, in particular the Transformer, where attention is used to remove RNNs from NMT entirely.
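The Transformer's core operation is scaled dot-product attention; as a rough companion to the deck, here is a minimal NumPy sketch of that computation. The shapes, random inputs, and function name are illustrative only and are not taken from the slides.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (n_queries, n_keys) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the keys
    return weights @ V                                    # weighted sum of values

# Toy example: 3 query positions attending over 4 key/value positions, d_model = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```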

CVPR 2018 Paper Reading: MobileNet V2
Published Sat, 18 Aug 2018 11:49:18 GMT by Khang Pham
/slideshow/cvpr-2018-paper-reading-mobilenet-v2/110420324
際際滷s for a paper-reading session of the Vietnam AI Community in Japan, explaining MobileNetV2: Inverted Residuals and Linear Bottlenecks, a CVPR 2018 paper.
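The paper's central building block is the inverted residual with a linear bottleneck: expand with a 1x1 convolution, filter with a depthwise 3x3 convolution, then project back down with no non-linearity. The Keras sketch below is a minimal illustration of that block under assumed channel counts and input size; it is not the full MobileNetV2 architecture from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, expansion=6, out_channels=24, stride=1):
    """One MobileNetV2-style block: expand -> depthwise -> linear bottleneck."""
    in_channels = x.shape[-1]
    # 1x1 expansion to a wider representation, with ReLU6.
    h = layers.Conv2D(expansion * in_channels, 1, use_bias=False)(x)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(max_value=6.0)(h)
    # 3x3 depthwise convolution operates on the expanded features.
    h = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(max_value=6.0)(h)
    # 1x1 projection back down; no activation here (the "linear bottleneck").
    h = layers.Conv2D(out_channels, 1, use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    # Residual connection only when the input and output shapes match.
    if stride == 1 and in_channels == out_channels:
        h = layers.Add()([x, h])
    return h

inputs = tf.keras.Input(shape=(56, 56, 24))   # assumed input size for illustration
outputs = inverted_residual(inputs, expansion=6, out_channels=24, stride=1)
tf.keras.Model(inputs, outputs).summary()
```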

A note on word embedding
Published Sun, 03 Jun 2018 09:39:15 GMT by Khang Pham
/KhangPham3/a-note-on-word-embedding
The slides cover a few state-of-the-art word-embedding models and give a detailed explanation of algorithms for approximating the softmax function in language models.
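One common family of softmax approximations is sampling-based. As an illustrative example (chosen here, not necessarily the one emphasized in the slides), the NumPy sketch below shows the skip-gram negative-sampling objective for a single training pair, with made-up embedding dimensions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(center, context, negatives):
    """Skip-gram with negative sampling: score the true (center, context) pair against
    k sampled noise words instead of normalizing over the full vocabulary."""
    pos = np.log(sigmoid(context @ center))              # true pair should score high
    neg = np.sum(np.log(sigmoid(-negatives @ center)))   # noise words should score low
    return -(pos + neg)

# Toy vectors: 50-dimensional embeddings, 5 negative samples.
rng = np.random.default_rng(0)
center = rng.normal(scale=0.1, size=50)
context = rng.normal(scale=0.1, size=50)
negatives = rng.normal(scale=0.1, size=(5, 50))
print(negative_sampling_loss(center, context, negatives))
```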

Overview on Optimization algorithms in Deep Learning
Published Sun, 25 Jun 2017 04:17:50 GMT by Khang Pham
/slideshow/overview-on-optimization-algorithms-in-deep-learning/77238170
An overview of function optimization in general and in deep learning. The slides cover algorithms ranging from basic batch and stochastic gradient descent to state-of-the-art methods such as Momentum, Adagrad, RMSprop, and Adam.
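For concreteness, the sketch below implements the Adam update rule (with its commonly used default hyperparameters) on a toy quadratic objective in NumPy; the toy problem, learning rate, and step count are illustrative and not taken from the slides.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its square,
    bias-corrected, then a per-parameter scaled step."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy problem: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)  # close to the minimizer [0, 0]
```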
