Slideshows by User: npinto
SlideShare feed, last updated Wed, 03 Jul 2019 04:43:42 GMT

"AI" for Blockchain Security (Case Study: Cosmos)
Link: /slideshow/ai-for-blockchain-security-case-study-cosmos/153277708
Published: Wed, 03 Jul 2019 04:43:42 GMT
Presentation given at the "Interchain Conversations" conference in Berlin (Full Node, June 2019)

High-Performance Computing Needs Machine Learning... And Vice Versa (NIPS 2011, Big Learning)
Link: /slideshow/highperformance-computing-needs-machine-learning-and-vice-versa-nips-2011-big-learning/10621921
Published: Sat, 17 Dec 2011 03:44:03 GMT
http://biglearn.org

[Harvard CS264] 16 - Managing Dynamic Parallelism on GPUs: A Case Study of High Performance Sorting (Duane Merrill, University of Virginia)
Link: /slideshow/harvard-cs264-16-managing-dynamic-parallelism-on-gpus-a-case-study-of-high-performance-sorting-duane-merrill-university-of-virginia/7789442
Published: Sat, 30 Apr 2011 14:23:52 GMT
http://cs264.org http://goo.gl/1K2fI

[Harvard CS264] 15a - The Onset of Parallelism, Changes in Computer Architecture and Microsoft's Role in the Transition (David Rich, Microsoft Research)
Link: /slideshow/harvard-cs264-15a-the-onset-of-parallelism-changes-in-computer-architecture-and-microsofts-role-in-the-transition-david-rich-microsoft-research/7772271
Published: Thu, 28 Apr 2011 21:07:31 GMT
http://cs264.org http://goo.gl/mBWaO

[Harvard CS264] 15a - Jacket: Visual Computing (James Malcolm, Accelereyes)
Link: /slideshow/harvard-cs264-15a-xxx/7772228
Published: Thu, 28 Apr 2011 21:00:02 GMT
http://cs264.org http://goo.gl/068h1

[Harvard CS264] 14 - Dynamic Compilation for Massively Parallel Processors (Gregory Diamos, Georgia Tech)
Link: /slideshow/harvard-cs264-14-dynamic-compilation-for-massively-parallel-processors-gregory-diamos-georgia-tech/7659117
Published: Sun, 17 Apr 2011 21:24:13 GMT
http://cs264.org http://j.mp/h2zN72

[Harvard CS264] 13 - The R-Stream High-Level Program Transformation Tool / Programming GPUs without Writing a Line of CUDA (Nicolas Vasilache, Reservoir Labs)
Link: /slideshow/reservoir-labs-harvardpresentation/7607033
Published: Tue, 12 Apr 2011 18:40:08 GMT
http://cs264.org http://j.mp/fmzhYl

[Harvard CS264] 12 - Irregular Parallelism on the GPU: Algorithms and Data Structures (John Owens, UC Davis)
Link: /slideshow/harvard-cs264-12-irregular-parallelism-on-the-gpu-algorithms-and-data-structures-john-owens-uc-davis/7556631
Published: Thu, 07 Apr 2011 23:05:07 GMT
http://cs264.org http://j.mp/gUKodD

[Harvard CS264] 11b - Analysis-Driven Performance Optimization with CUDA (Cliff Woolley, NVIDIA)
Link: /slideshow/harvard-cs264-11b-analysisdriven-performance-optimization-with-cuda-cliff-woolley-nvidia/7556603
Published: Thu, 07 Apr 2011 23:00:21 GMT
http://cs264.org http://j.mp/eF9Oj2

[Harvard CS264] 11a - Programming the Memory Hierarchy with Sequoia (Mike Bauer, Stanford)
Link: /slideshow/harvard-cs264-11a-programming-the-memory-hierarchy-with-sequoia-mike-bauer-stanford/7556598
Published: Thu, 07 Apr 2011 22:58:08 GMT
http://cs264.org http://j.mp/gBHYkg

[Harvard CS264] 10b - cl.oquence: High-Level Language Abstractions for Low-Level Programming (Cyrus Omar, CMU)
Link: /slideshow/harvard-cs264-10b-cloquence-highlevel-language-abstractions-for-lowlevel-programming-cyrus-omar-cmu/7503556
Published: Sun, 03 Apr 2011 18:26:40 GMT
http://cs264.org http://bit.ly/gjQ3k7

[Harvard CS264] 10a - Easy, Effective, Efficient: GPU Programming in Python with PyOpenCL and PyCUDA (Andreas Kloeckner, NYU)
Link: /slideshow/andreas-cs264/7471283
Published: Thu, 31 Mar 2011 20:25:43 GMT
http://cs264.org

Abstract: High-level scripting languages are in many ways polar opposites of GPUs. GPUs are highly parallel, subject to hardware subtleties, and designed for maximum throughput, and they offer a tremendous advance in the performance achievable for a significant number of computational problems. Scripting languages such as Python, on the other hand, favor ease of use over computational speed and do not generally emphasize parallelism. PyOpenCL and PyCUDA are two packages that attempt to join the two. Through concrete examples, at both the toy and the whole-application level, this talk aims to demonstrate that combining these opposites creates a programming environment greater than the sum of its parts.

Speaker biography: Andreas Klöckner obtained his PhD working with Jan Hesthaven in the Department of Applied Mathematics at Brown University. He worked on a variety of topics, all aiming to broaden the utility of discontinuous Galerkin (DG) methods, including their use in the simulation of plasma physics and the demonstration of their particular suitability for computation on throughput-oriented graphics processors (GPUs). He also worked on multi-rate time stepping methods and shock capturing schemes for DG. In the fall of 2010, he joined the Courant Institute of Mathematical Sciences at New York University as a Courant Instructor, where he works on problems in computational electromagnetics with Leslie Greengard. His research interests include:
- Discontinuous Galerkin and integral equation methods for wave propagation
- Programming tools for parallel architectures
- High-order unstructured particle-in-cell methods for plasma simulation

[Harvard CS264] 09 - Machine Learning on Big Data: Lessons Learned from Google Projects (Max Lin, Google Research)
Link: /slideshow/harvard-cs264-09-machine-learning-on-big-data-lessons-learned-from-google-projects-max-lin-google-research/7471135
Published: Thu, 31 Mar 2011 19:57:55 GMT

Abstract: Machine learning researchers and practitioners develop computer algorithms that "improve performance automatically through experience". At Google, machine learning is applied to solve many problems, such as prioritizing emails in Gmail, recommending tags for YouTube videos, and identifying different aspects from online user reviews. Machine learning on big data, however, is challenging: some "simple" machine learning algorithms with quadratic time complexity, while running fine on hundreds of records, are almost impractical to use on billions of records. In this talk, I will describe lessons drawn from various Google projects on developing large-scale machine learning systems. These systems build on top of Google's computing infrastructure, such as GFS and MapReduce, and attack the scalability problem through massively parallel algorithms. I will present the design decisions made in these systems and strategies for scaling and speeding up machine learning systems on web-scale data.

Speaker biography: Max Lin is a software engineer with Google Research in the New York City office. He is the tech lead of the Google Prediction API, a machine learning web service in the cloud. Prior to Google, he published research on video content analysis, sentiment analysis, machine learning, and cross-lingual information retrieval. He holds a PhD in Computer Science from Carnegie Mellon University.

[Harvard CS264] 08a - Cloud Computing, Amazon EC2, MIT StarCluster (Justin Riley, MIT)
Link: /slideshow/harvard-cs264-08a-cloud-computing-amazon-ec2-mit-starcluster/7351655
Published: Tue, 22 Mar 2011 17:28:03 GMT
http://cs264.org

[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)
Link: /slideshow/harvard-cs264-08b-mapreduce-and-hadoop/7351615
Published: Tue, 22 Mar 2011 17:23:24 GMT
http://cs264.org

[Harvard CS264] 07 - GPU Cluster Programming (MPI & ZeroMQ)
Link: /slideshow/harvard-cs264-07-gpu-cluster-programming-mpi-zeromq/7197360
Published: Tue, 08 Mar 2011 18:29:26 GMT
http://cs264.org

[Harvard CS264] 06 - CUDA Ninja Tricks: GPU Scripting, Meta-programming & Auto-tuning
Tue, 01 Mar 2011 17:19:45 GMT
/slideshow/harvard-cs264-06-cuda-ninja-tricks-gpu-scripting-metaprogramming-autotuning/7108507
http://cs264.org

[Harvard CS264] 05 - Advanced-level CUDA Programming
Tue, 22 Feb 2011 17:32:19 GMT
/slideshow/harvard-cs264-05-advancedlevel-cuda-programming/7021926
http://cs264.org

[Harvard CS264] 04 - Intermediate-level CUDA Programming
Tue, 15 Feb 2011 18:09:10 GMT
/slideshow/cs264-2011-04cudaintermediatesharetmpopt/6940226
http://cs264.org

[Harvard CS264] 03 - Introduction to GPU Computing, CUDA Basics
Wed, 09 Feb 2011 02:46:20 GMT
/slideshow/harvard-cs264-03-introduction-to-gpu-computing-cuda-basics/6859031
http://cs264.org