際際滷Share feed for slideshows by user NesreenAhmed2 (Nesreen K. Ahmed)
Last updated: Tue, 17 Jul 2018 12:58:39 GMT

Sampling from Massive Graph Streams: A Unifying Framework
Nesreen K. Ahmed, Tue, 17 Jul 2018 12:58:39 GMT
/NesreenAhmed2/sampling-from-massive-graph-streams-a-unifying-framework
Invited talk at the Dagstuhl Seminar on High Performance Graph Algorithms at Schloss Dagstuhl, Germany.
The Power of Motif Counting: Theory, Algorithms, and Applications for Large Graphs
Nesreen K. Ahmed, Tue, 17 Jul 2018 12:53:17 GMT
/slideshow/the-power-of-motif-counting-theory-algorithms-and-applications-for-large-graphs/106274487
Keynote presentation at GraML 2018, the Second Workshop on the Intersection of Graph Algorithms and Machine Learning, co-located with IPDPS 2018.
Sampling for Approximate Bipartite Network Projection
Nesreen K. Ahmed, Tue, 17 Jul 2018 12:33:32 GMT
/slideshow/sampling-for-approximate-bipartite-network-projection/106272626
Bipartite graphs manifest as a stream of edges that represent transactions, e.g., purchases by retail customers. Recommender systems employ neighborhood-based measures of node similarity, such as the pairwise number of common neighbors (CN) and related metrics. While the number of node pairs that share neighbors is potentially enormous, only a relatively small proportion of them have many common neighbors. This motivates a weighted sampling approach that preferentially samples such node pairs. This paper presents a new sampling algorithm that provides a fixed-size, unbiased estimate of the similarity matrix resulting from a bipartite edge-stream projection. The algorithm has two components. First, it maintains a reservoir of sampled bipartite edges with sampling weights that favor the selection of high-similarity nodes. Second, arriving edges generate a stream of similarity updates based on their adjacency with the current sample; these updates are aggregated in a second reservoir-sample-based stream aggregator to yield the final unbiased estimate. Experiments on real-world graphs show that a 10% sample at each stage yields estimates of high-similarity edges with weighted relative errors of about 1%.
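To make the two-stage design concrete, a minimal Python sketch is shown below. It is an illustration under stated assumptions, not the paper's exact estimator: the first stage is a weighted edge reservoir using Efraimidis-Spirakis priority keys, and the second stage is a uniform reservoir over common-neighbor updates whose counts are rescaled Horvitz-Thompson style at the end. The class name TwoStageProjectionSampler, its parameters, and the rescaling rule are all hypothetical choices for this sketch.

```python
import heapq
import random
from collections import defaultdict

class TwoStageProjectionSampler:
    """Sketch of a two-stage reservoir scheme for bipartite projection.

    Stage 1 keeps a weighted reservoir of bipartite edges (user, item).
    Stage 2 keeps a uniform reservoir of the common-neighbor updates that
    arriving edges generate against the current edge sample; the final
    counts are rescaled Horvitz-Thompson style.
    """

    def __init__(self, edge_budget, update_budget):
        self.edge_budget = edge_budget
        self.update_budget = update_budget
        self.edge_heap = []                     # (priority, (user, item))
        self.item_to_users = defaultdict(set)   # item -> users in the edge sample
        self.update_heap = []                   # (priority, (user_a, user_b))
        self.updates_seen = 0

    def process(self, user, item, weight=1.0):
        # Stage 2: each sampled edge (v, item) makes the pair (user, v) one
        # unit more similar; sample these updates uniformly via random keys.
        for v in self.item_to_users[item]:
            if v == user:
                continue
            self.updates_seen += 1
            pair = (min(user, v), max(user, v))  # assumes orderable node ids
            key = random.random()
            if len(self.update_heap) < self.update_budget:
                heapq.heappush(self.update_heap, (key, pair))
            elif key > self.update_heap[0][0]:
                heapq.heapreplace(self.update_heap, (key, pair))

        # Stage 1: weighted reservoir over edges using
        # Efraimidis-Spirakis priorities u ** (1 / weight).
        prio = random.random() ** (1.0 / weight)
        if len(self.edge_heap) < self.edge_budget:
            heapq.heappush(self.edge_heap, (prio, (user, item)))
            self.item_to_users[item].add(user)
        elif prio > self.edge_heap[0][0]:
            _, (old_user, old_item) = heapq.heapreplace(self.edge_heap, (prio, (user, item)))
            self.item_to_users[old_item].discard(old_user)
            self.item_to_users[item].add(user)

    def similarity_estimates(self):
        # Each retained update stands for updates_seen / |reservoir| updates.
        if not self.update_heap:
            return {}
        scale = self.updates_seen / len(self.update_heap)
        estimates = defaultdict(float)
        for _, pair in self.update_heap:
            estimates[pair] += scale
        return dict(estimates)
```

A typical use of this sketch would be to construct the sampler with the two budgets, call process(user, item) for every transaction in the stream, and read similarity_estimates() at any point for an approximate common-neighbor matrix over the sampled pairs.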
High-Performance Graph Analysis and Modeling
Nesreen K. Ahmed, Tue, 17 Apr 2018 22:08:38 GMT
/slideshow/highperformance-graph-analysis-and-modeling/94148441
Southern Data Science Conference 2018.
Representation Learning in Large Attributed Graphs
Nesreen K. Ahmed, Sun, 10 Dec 2017 03:57:45 GMT
/slideshow/representation-learning-in-large-attributed-graphs/83762211
Invited talk at the Women in Machine Learning Workshop at NIPS 2017.
On Sampling from Massive Graph Streams
Nesreen K. Ahmed, Tue, 29 Aug 2017 20:34:05 GMT
/slideshow/on-sampling-from-massive-graph-streams/79264480
際際滷s from my talk at VLDB 2017.
Graph Sample and Hold: A Framework for Big Graph Analytics
Nesreen K. Ahmed, Thu, 21 Jul 2016 08:43:41 GMT
/slideshow/graph-sample-and-hold-a-framework-for-big-graph-analytics/64238057
Sampling is a standard approach in big-graph analytics; the goal is to efficiently estimate graph properties by consulting a sample of the whole population. A perfect sample is assumed to mirror every property of the whole population. Unfortunately, such a perfect sample is hard to collect in complex populations such as graphs (e.g., web graphs, social networks), where an underlying network connects the units of the population. Therefore, a good sample will be representative in the sense that graph properties of interest can be estimated with a known degree of accuracy. While previous work focused on sampling schemes for estimating particular graph properties (e.g., the triangle count), much less is known for the case when various graph properties must be estimated from the same sampling scheme. In this paper, we propose a generic stream-sampling framework for big-graph analytics, called Graph Sample and Hold (gSH), which samples from massive graphs sequentially in a single pass, one edge at a time, while maintaining a small state in memory. We use a Horvitz-Thompson construction in conjunction with a scheme that samples arriving edges without adjacencies to previously sampled edges with probability p and holds edges with adjacencies with probability q. Our sample-and-hold framework facilitates the accurate estimation of subgraph patterns by letting the sampling process depend on previous history. Within our framework, we show how to produce statistically unbiased estimators for various graph properties from the sample. Given that the graph analytics will run on a sample instead of the whole population, the runtime complexity is kept under control. Moreover, given that the estimators are unbiased, the approximation error is also kept under control.
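A minimal sketch of the sample-and-hold selection rule is shown below, assuming an undirected edge stream. It illustrates only the core idea from the abstract: an arriving edge adjacent to the current sample is held with probability q, otherwise it is sampled with probability p, and the stored per-edge selection probabilities support a Horvitz-Thompson estimate (here, of the total edge count). The function names are hypothetical, and this is not the paper's full gSH estimator suite.

```python
import random
from collections import defaultdict

def graph_sample_and_hold(edge_stream, p, q):
    """Single-pass sample-and-hold over an undirected edge stream.

    Returns a dict mapping each sampled edge to the probability with
    which it was selected (p if it had no adjacency to the sample when
    it arrived, q if it did).
    """
    sampled = {}                  # edge -> selection probability
    sampled_degree = defaultdict(int)   # node -> degree within the sample
    for (u, v) in edge_stream:
        prob = q if (sampled_degree[u] > 0 or sampled_degree[v] > 0) else p
        if random.random() < prob:
            sampled[(u, v)] = prob
            sampled_degree[u] += 1
            sampled_degree[v] += 1
    return sampled

def estimate_edge_count(sampled):
    """Horvitz-Thompson estimate of the total number of edges seen:
    each sampled edge contributes the reciprocal of its selection
    probability, which makes the estimator unbiased."""
    return sum(1.0 / prob for prob in sampled.values())
```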
Fast Graphlet Decomposition: Theory, Algorithms, and Applications
Nesreen K. Ahmed, Tue, 19 Jul 2016 19:41:19 GMT
/slideshow/fast-graphlet-decomposition-theory-algorithms-and-applications/64177657
From social science to biology, graphlets have found numerous applications and are used as the building blocks of network analysis. In social science, graphlet analysis (typically known as the k-subgraph census) is widely adopted in sociometric studies. Much of the work in this vein focused on analyzing triadic tendencies as important structural features of social networks (e.g., transitivity or triadic closure) as well as analyzing triadic configurations as the basis for various social network theories (e.g., social balance, strength of weak ties, stability of ties, or trust). In biology, graphlets are widely used for protein function prediction, network alignment, and phylogeny, to name a few. More recently, there has been increased interest in exploring the role of graphlet analysis in computer networking (e.g., for web spam detection, analysis of peer-to-peer protocols and Internet AS graphs), chemoinformatics, and image segmentation, among others. While graphlet counting and discovery have seen tremendous success and impact in a variety of domains from social science to biology, there has not yet been a fast and efficient approach for computing the frequencies of these patterns. The main contribution of this work is a fast, efficient, and parallel framework and a family of algorithms for counting graphlets of up to k nodes in only a fraction of the time required by current methods. The proposed graphlet counting algorithm leverages a number of combinatorial arguments for different graphlets: for each edge, we count a few graphlets, and from these counts together with the combinatorial arguments, we obtain the exact counts of the others in constant time. Furthermore, we show a number of important machine learning tasks that rely on this approach, including graph anomaly detection, as well as using graphlets as features for improving community detection, role discovery, graph classification, and relational learning.
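As an illustration of the "count a few graphlets per edge, derive the rest combinatorially" idea, the sketch below computes connected 3-node graphlet counts: triangles are counted directly from per-edge common-neighbor intersections, and open wedges then follow from node degrees and the triangle total with no further subgraph enumeration. This is a simplified serial example, not the paper's parallel k-node graphlet algorithm; the function name is hypothetical.

```python
from collections import defaultdict

def three_node_graphlet_counts(edges):
    """Count triangles directly per edge, then derive open wedges
    (the other connected 3-node graphlet) combinatorially."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)

    # Direct count: for each edge, the common neighbors of its endpoints
    # close triangles; summing over edges counts each triangle 3 times.
    triangle_triple_count = 0
    for u in adj:
        for v in adj[u]:
            if u < v:  # assumes orderable node ids; visit each edge once
                triangle_triple_count += len(adj[u] & adj[v])
    triangles = triangle_triple_count // 3

    # Combinatorial step: wedges centered at a node number C(deg, 2);
    # each triangle closes exactly 3 of them, so open wedges follow
    # without enumerating any additional subgraphs.
    total_wedges = sum(d * (d - 1) // 2 for d in (len(n) for n in adj.values()))
    open_wedges = total_wedges - 3 * triangles

    return {"triangle": triangles, "wedge": open_wedges}
```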
Profile: Nesreen K. Ahmed (NesreenAhmed2). Research interests: Statistical Machine Learning, Data Mining, Graph Mining, Artificial Intelligence, Deep Learning. Homepage: http://www.nesreenahmed.com/