Slideshows by User: ssuser62b35f (Joonyoung Yi) · SlideShare feed · Last updated: Thu, 04 Jul 2019 12:01:17 GMT

Mixture-Rank Matrix Approximation for Collaborative Filtering
Link: /slideshow/mixturerank-matrix-approximation-for-collaborative-filtering/153538992
Unofficial slides for Mixture-Rank Matrix Approximation for Collaborative Filtering (NIPS 2017). Abstract of the paper: Low-rank matrix approximation (LRMA) methods have achieved excellent accuracy among today's collaborative filtering (CF) methods. In existing LRMA methods, the rank of the user/item feature matrices is typically fixed, i.e., the same rank is adopted to describe all users/items. However, our studies show that submatrices with different ranks can coexist in the same user-item rating matrix, so approximations with fixed ranks cannot perfectly describe the internal structure of the rating matrix, leading to inferior recommendation accuracy. In this paper, a mixture-rank matrix approximation (MRMA) method is proposed, in which user-item ratings are characterized by a mixture of LRMA models with different ranks. Meanwhile, a learning algorithm based on iterated conditional modes (ICM) is proposed to tackle the non-convex optimization problem pertaining to MRMA. Experimental studies on the MovieLens and Netflix datasets demonstrate that MRMA outperforms six state-of-the-art LRMA-based CF methods in terms of recommendation accuracy.

Posted: Thu, 04 Jul 2019 12:01:17 GMT · Joonyoung Yi (ssuser62b35f)
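The core idea above is that each rating is explained by a mixture of fixed-rank LRMA models. Below is a minimal sketch of how such a mixture prediction could look; this is my reading of the abstract, not the paper's exact parameterization, and the per-user weights `alpha`, per-item weights `beta`, and the global mean are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, ranks = 100, 200, [5, 10, 20]

# One LRMA factor pair (U_k, V_k) per candidate rank k (randomly initialized here).
models = [(rng.normal(scale=0.1, size=(n_users, k)),
           rng.normal(scale=0.1, size=(n_items, k))) for k in ranks]

# Assumed per-user and per-item mixture weights over ranks (normalized to sum to 1).
alpha = rng.random((n_users, len(ranks)))
beta = rng.random((n_items, len(ranks)))
alpha /= alpha.sum(axis=1, keepdims=True)
beta /= beta.sum(axis=1, keepdims=True)

def predict(u, i, global_mean=3.5):
    """Mixture-of-ranks prediction: weight each rank-k LRMA estimate by the
    combined user/item preference for that rank."""
    w = alpha[u] * beta[i]
    w /= w.sum()
    per_rank = np.array([U[u] @ V[i] for U, V in models])
    return global_mean + float(w @ per_rank)

print(predict(0, 0))
```

In the paper, the factors and mixture weights would be learned jointly (via iterated conditional modes); here they are random placeholders.
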
Sparsity Normalization: Stabilizing the Expected Outputs of Deep Networks
Link: /slideshow/sparsity-normalization-stabilizing-the-expected-outputs-of-deep-networks/149029063
The learning of deep models, in which numerous parameters are superimposed, is known to be a fairly sensitive process and should be done carefully, using a combination of techniques that help stabilize it. We introduce an additional challenge that has not been explicitly studied: the heterogeneity of sparsity at the instance level, due to missing values or the innate nature of the input distribution. We confirm experimentally on widely used benchmark datasets that this variable-sparsity problem makes the output statistics of neurons unstable and makes the learning process more difficult by saturating non-linearities. We also provide an analysis of this phenomenon and, based on it, present a simple technique to prevent the issue, referred to as Sparsity Normalization (SN). Finally, we show that performance can be significantly improved with SN on certain popular benchmark datasets, or that similar performance can be achieved with lower capacity. Focusing especially on the collaborative filtering problem, where the variable-sparsity issue has been completely ignored, we achieve new state-of-the-art results on the MovieLens 100K and 1M datasets simply by applying Sparsity Normalization. https://arxiv.org/abs/1906.00150

Posted: Fri, 07 Jun 2019 07:09:10 GMT · Joonyoung Yi (ssuser62b35f)
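As a rough illustration of the idea described above, the sketch below rescales each input instance so that its expected total activation does not depend on how many of its entries are observed. The exact scaling constant used in the paper may differ, so treat `c` as an assumption.

```python
import numpy as np

def sparsity_normalize(x, mask, c=None):
    """Scale each instance so the expected sum of its inputs does not depend
    on how many entries are observed.

    x    : (batch, d) inputs with missing entries already filled with 0.
    mask : (batch, d) binary matrix, 1 where the entry is observed.
    c    : target scale; here assumed to be the average number of observed
           entries per instance (an assumption, not necessarily the paper's choice).
    """
    counts = mask.sum(axis=1, keepdims=True)           # number of observed entries per instance
    if c is None:
        c = counts.mean()
    return x * mask * (c / np.maximum(counts, 1.0))

# Toy example: two instances with very different sparsity levels.
x = np.array([[1.0, 2.0, 0.0, 0.0], [1.0, 2.0, 3.0, 4.0]])
mask = (x != 0).astype(float)
print(sparsity_normalize(x, mask))
```
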
Low-rank Matrix Approximation with Stability
Link: /slideshow/lowrank-matrix-approximation-with-stability/132383287
Slides for Low-rank Matrix Approximation with Stability (SMA), ICML 2016. Abstract of the paper: Low-rank matrix approximation has been widely adopted in machine learning applications with sparse data, such as recommender systems. However, the sparsity of the data, which is incomplete and noisy, introduces challenges to algorithm stability: small changes in the training data may significantly change the models. As a result, existing low-rank matrix approximation solutions yield low generalization performance, exhibiting high error variance on the training dataset, and minimizing the training error may not guarantee error reduction on the testing dataset. In this paper, we investigate the algorithm stability problem of low-rank matrix approximations. We present a new algorithm design framework, which (1) introduces new optimization objectives to guide stable matrix approximation algorithm design, and (2) solves the optimization problem to obtain stable low-rank approximation solutions with good generalization performance. Experimental results on real-world datasets demonstrate that the proposed work achieves better prediction accuracy than both state-of-the-art low-rank matrix approximation methods and ensemble methods in recommendation tasks.

Posted: Tue, 19 Feb 2019 10:05:51 GMT · Joonyoung Yi (ssuser62b35f)
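For context, the fixed-rank LRMA objective that stability-oriented methods like SMA start from is the standard regularized matrix approximation below; the SMA-specific stability terms are not reproduced here.

```latex
\min_{U \in \mathbb{R}^{m \times k},\, V \in \mathbb{R}^{n \times k}}
\; \sum_{(i,j) \in \Omega} \bigl( R_{ij} - U_i V_j^{\top} \bigr)^2
\; + \; \lambda \bigl( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \bigr)
```

Here Ω is the set of observed ratings. As the abstract describes it, the paper's framework replaces or augments the data-fit term so that small perturbations of Ω change the learned factors, and hence the test error, less.
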
Introduction to MAML (Model Agnostic Meta Learning) with Discussions
Link: /slideshow/introduction-to-maml-model-agnostic-meta-learning-with-discussions-124492943/124492943
Slides for Model-Agnostic Meta-Learning (MAML), with additional experiments.

Posted: Fri, 30 Nov 2018 14:40:44 GMT · Joonyoung Yi (ssuser62b35f)
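For readers new to MAML, the sketch below shows the inner/outer loop on a toy 1-D regression problem. It uses the first-order approximation (the meta-gradient is taken at the adapted parameters) rather than differentiating through the inner update, and the task distribution and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy task: 1-D linear regression y = w_true * x with a task-specific w_true."""
    w_true = rng.normal()
    x = rng.normal(size=20)
    return x, w_true * x

def loss_grad(w, x, y):
    # Gradient of the mean squared error 0.5 * mean((w*x - y)^2) w.r.t. w.
    return np.mean((w * x - y) * x)

w = 0.0                  # meta-parameters (a single scalar here)
alpha, beta = 0.1, 0.01  # inner and outer learning rates (assumed values)

for step in range(1000):
    meta_grad = 0.0
    for _ in range(5):                                 # batch of tasks
        x, y = sample_task()
        w_adapted = w - alpha * loss_grad(w, x, y)     # inner-loop adaptation
        # First-order MAML: use the gradient at the adapted parameters as the
        # meta-gradient (ignores the second-order terms of the full MAML update).
        meta_grad += loss_grad(w_adapted, x, y)
    w -= beta * meta_grad / 5

print(w)
```
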
A Neural Autoregressive Approach to Collaborative Filtering (CF-NADE) Slides
Link: /slideshow/a-neural-autoregressive-approach-to-collaborative-filtering-cfnade-slide/121351681
Slides for A Neural Autoregressive Approach to Collaborative Filtering (CF-NADE). I made this deck to explain the paper. Abstract of the paper: This paper proposes CF-NADE, a neural autoregressive architecture for collaborative filtering (CF) tasks, inspired by the Restricted Boltzmann Machine (RBM)-based CF model and the Neural Autoregressive Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF tasks. Then we propose to improve the model by sharing parameters between different ratings. A factored version of CF-NADE is also proposed for better scalability. Furthermore, we take the ordinal nature of preferences into consideration and propose an ordinal cost to optimize CF-NADE, which shows superior performance. Finally, CF-NADE can be extended to a deep model with only moderately increased computational complexity. Experimental results show that CF-NADE with a single hidden layer beats all previous state-of-the-art methods on the MovieLens 1M, MovieLens 10M, and Netflix datasets, and that adding more hidden layers can further improve performance.

Posted: Thu, 01 Nov 2018 00:15:09 GMT · Joonyoung Yi (ssuser62b35f)
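The autoregressive factorization that NADE-style models rely on, applied here to a user's vector of ratings, is the following; the notation follows the usual NADE convention rather than necessarily the paper's exact symbols.

```latex
p(\mathbf{r}) \;=\; \prod_{i=1}^{D} p\bigl( r_{m_{o_i}} \,\big|\, \mathbf{r}_{m_{o_{<i}}} \bigr)
```

Here r is the vector of the user's D observed ratings, m indexes the rated items, o is an ordering over them, and each conditional is parameterized by a shared neural network.
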
Introduction to XGBoost
Link: /slideshow/introduction-to-xgboost/116796433
An introduction to XGBoost: an XGBoost tutorial and quick-start guide.

Posted: Thu, 27 Sep 2018 03:21:56 GMT · Joonyoung Yi (ssuser62b35f)
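Since the deck is a quick-start tutorial, here is a minimal, self-contained example of training a gradient-boosted regressor with the xgboost Python package on synthetic data; the hyperparameter values are placeholders, not recommendations from the slides.

```python
# Minimal XGBoost quick start on synthetic data (assumes `pip install xgboost`).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(scale=0.1, size=500)

model = xgb.XGBRegressor(
    n_estimators=200,     # number of boosted trees
    max_depth=3,          # depth of each tree
    learning_rate=0.1,    # shrinkage applied to each tree's contribution
)
model.fit(X[:400], y[:400])

pred = model.predict(X[400:])
rmse = float(np.sqrt(np.mean((pred - y[400:]) ** 2)))
print("test RMSE:", rmse)
```
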
Why biased matrix factorization works well?
Link: /slideshow/180829-why-biasedmfworkswell/112108990
These slides explain why biased matrix factorization works well.

Posted: Wed, 29 Aug 2018 13:35:11 GMT · Joonyoung Yi (ssuser62b35f)
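For reference, the standard biased matrix factorization model (global mean plus user and item biases plus a low-rank interaction term) and a plain SGD training loop look roughly like the sketch below; the rank, learning rate, and regularization values are illustrative, not taken from the slides.

```python
import numpy as np

# Biased matrix factorization in its standard form:
#   r_hat(u, i) = mu + b_u[u] + b_i[i] + P[u] @ Q[i]

rng = np.random.default_rng(0)
n_users, n_items, rank = 50, 80, 10
lr, reg = 0.01, 0.05

# Toy observed ratings: (user, item, rating) triples.
ratings = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
           for _ in range(2000)]
mu = np.mean([r for _, _, r in ratings])   # global mean rating

b_u = np.zeros(n_users)                    # user biases
b_i = np.zeros(n_items)                    # item biases
P = rng.normal(scale=0.1, size=(n_users, rank))
Q = rng.normal(scale=0.1, size=(n_items, rank))

for epoch in range(20):
    for u, i, r in ratings:
        err = r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])
        b_u[u] += lr * (err - reg * b_u[u])
        b_i[i] += lr * (err - reg * b_i[i])
        # Simultaneous update of the latent factors (RHS uses the old values).
        P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                      Q[i] + lr * (err * P[u] - reg * Q[i]))

print(mu + b_u[0] + b_i[0] + P[0] @ Q[0])
```
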
Dynamically Expandable Network (DEN)
Link: /slideshow/180808-dynamically-expandable-network/109116556
Third-party slides for the paper Lifelong Learning with Dynamically Expandable Networks (DEN).

Posted: Wed, 08 Aug 2018 19:00:20 GMT · Joonyoung Yi (ssuser62b35f)

Introduction to Low-rank Matrix Completion
Link: /ssuser62b35f/180725-introduction-tomatrixcompletion
Introductory slides on low-rank matrix completion.

Posted: Wed, 25 Jul 2018 07:33:46 GMT · Joonyoung Yi (ssuser62b35f)

Exact Matrix Completion via Convex Optimization Slides (PPT)
Link: /slideshow/exact-matrix-completion-via-convex-optimization-slideppt/94862181
際際滷 of the paper "Exact Matrix Completion via Convex Optimization" of Emmanuel J. Cand竪s and Benjamin Recht. We presented this slide in KAIST CS592 Class, April 2018. - Code: https://github.com/JoonyoungYi/MCCO-numpy - Abstract of the paper: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys ヰ駒1.2log for some positive numerical constant C, then with very high probability, most nn matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.]]>

Posted: Tue, 24 Apr 2018 11:40:16 GMT · Joonyoung Yi (ssuser62b35f)
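The convex program in the abstract (minimize the nuclear norm subject to matching the observed entries) can be prototyped directly with cvxpy, as in the sketch below. The problem size and sampling rate are toy values, and this is a quick illustration separate from the numpy implementation linked above.

```python
# Recover a low-rank matrix from a subset of entries by nuclear-norm minimization.
# Assumes `pip install cvxpy` (the bundled SCS solver can handle the resulting SDP).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))      # ground-truth rank-r matrix
mask = (rng.random((n, n)) < 0.5).astype(float)             # 1 where the entry is observed

X = cp.Variable((n, n))
objective = cp.Minimize(cp.normNuc(X))                       # nuclear norm ||X||_*
constraints = [cp.multiply(mask, X) == cp.multiply(mask, M)] # agree on observed entries
cp.Problem(objective, constraints).solve()

print("relative recovery error:",
      np.linalg.norm(X.value - M, "fro") / np.linalg.norm(M, "fro"))
```
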