際際滷shows by User: liuml07 (際際滷Share feed; homepage: people.apache.org/~liuml07)

Cloudy with a chance of Hadoop - DataWorks Summit 2017 San Jose
Mingliang Liu | Wed, 14 Jun 2017 | /slideshow/cloudy-with-a-chance-of-hadoop-dataworks-summit-2017-san-jose/76924072
This talk covers use cases and scenarios for running Hadoop applications in the cloud, along with the problems encountered and lessons learned at Hortonworks. It includes a couple of deep dives into Hadoop cluster/service auto-scaling, fault tolerance, and object-storage consistency. Presented at DataWorks Summit 2017 San Jose as a joint talk by Ram Venkatesh and Mingliang Liu.
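One of the talk's deep dives is object-storage consistency. As a hedged illustration of what that involves on the Hadoop side, the sketch below uses the S3A connector's Java Configuration API to enable S3Guard, the DynamoDB-backed metadata store that Hadoop introduced to mask S3's then-eventual listing consistency. The bucket name and region are hypothetical placeholders, and this is a generic example rather than anything taken from the slides.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Minimal sketch: enable S3Guard on the S3A connector so that
 * directory listings come from a consistent DynamoDB metadata
 * store rather than eventually-consistent S3 LIST calls.
 * Bucket name and region are hypothetical placeholders.
 */
public class S3GuardExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Route S3A metadata operations through DynamoDB (S3Guard).
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore");
    conf.set("fs.s3a.s3guard.ddb.region", "us-west-2");
    // Create the DynamoDB table on first use if it does not exist.
    conf.setBoolean("fs.s3a.s3guard.ddb.table.create", true);

    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
    for (FileStatus status : fs.listStatus(new Path("s3a://example-bucket/data"))) {
      System.out.println(status.getPath());
    }
  }
}
```

S3Guard trades an extra DynamoDB dependency for consistent listings, which matters for job-commit protocols that list output directories immediately after writing them.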

Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
Mingliang Liu | Fri, 09 Dec 2016 | /slideshow/combining-phase-identification-and-statistic-modeling-for-automated-parallel-benchmark-generation/70002161
Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is challenging and costly to obtain high-fidelity benchmarks that reflect the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPrime, a novel tool for trace-based automatic parallel benchmark generation. Taking standard communication/I-O traces of an application's execution as input, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPrime benchmarks: they retain the original applications' performance characteristics, in particular their relative performance across platforms, and the resulting benchmarks, already released online, are much more compact and easier to port than the original applications. http://dl.acm.org/citation.cfm?id=2745876
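The abstract's two-step recipe, phase identification followed by statistical regeneration of event parameters, can be pictured with a toy sketch. The code below is not APPrime's actual algorithm (the linked paper specifies that); it only illustrates the general pattern: bucket trace events into phases by an assumed operation signature, fit a simple normal model to each phase's payload sizes, and sample synthetic events from the fitted models. All names and the trace data are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

/**
 * Toy illustration of trace-driven benchmark generation:
 * group events by operation type ("phase"), fit a simple
 * normal model to each group's payload sizes, then emit
 * synthetic events by sampling from the fitted models.
 * This is NOT APPrime's algorithm; names are hypothetical.
 */
public class TraceModelSketch {
  record Event(String op, double bytes) {}

  public static void main(String[] args) {
    List<Event> trace = List.of(
        new Event("MPI_Send", 4096), new Event("MPI_Send", 4200),
        new Event("write", 1048576), new Event("write", 900000),
        new Event("MPI_Send", 3900));

    // Phase identification (toy): bucket events by operation type.
    Map<String, List<Double>> phases = new HashMap<>();
    for (Event e : trace) {
      phases.computeIfAbsent(e.op(), k -> new ArrayList<>()).add(e.bytes());
    }

    // Statistical modeling: fit mean and stddev per phase,
    // then regenerate a synthetic event by sampling.
    Random rng = new Random(42);
    phases.forEach((op, sizes) -> {
      double mean = sizes.stream().mapToDouble(Double::doubleValue).average().orElse(0);
      double var = sizes.stream()
          .mapToDouble(s -> (s - mean) * (s - mean)).sum() / sizes.size();
      double std = Math.sqrt(var);
      double synthetic = Math.max(0, mean + std * rng.nextGaussian());
      System.out.printf("%s: mean=%.0f std=%.0f -> synthetic=%.0f bytes%n",
          op, mean, std, synthetic);
    });
  }
}
```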

ACIC: Automatic Cloud I/O Configurator for HPC Applications
Mingliang Liu | Fri, 09 Dec 2016 | /slideshow/acic-automatic-cloud-io-configurator-for-hpc-applications/70001900
ACIC is a system that automatically searches many candidate I/O system configurations for an optimized one for each individual HPC application running on a given cloud platform. This work was published at SuperComputing 2013 in Denver; see the event page at http://sc13.supercomputing.org/schedule/event_detail.php-evid=pap127.html
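ACIC's job can be pictured as scoring points in an I/O configuration space. The sketch below is a naive exhaustive search with a made-up cost model, not ACIC's actual technique (the SC13 paper describes a model-driven configurator); every parameter name and number here is hypothetical.

```java
import java.util.List;

/**
 * Toy illustration of searching a cloud I/O configuration space.
 * A real system like ACIC uses trained performance models rather
 * than a toy cost function; everything here is hypothetical.
 */
public class IoConfigSearch {
  record Config(String fsType, int stripeKB, int ioServers) {}

  // Hypothetical cost model standing in for measured/predicted runtime.
  static double predictedCost(Config c) {
    double base = c.fsType().equals("pvfs") ? 100 : 120;
    return base - Math.log(c.stripeKB()) * 5 - c.ioServers() * 2.5;
  }

  public static void main(String[] args) {
    List<String> fsTypes = List.of("nfs", "pvfs");
    int[] stripes = {64, 256, 1024};
    int[] servers = {1, 2, 4};

    Config best = null;
    double bestCost = Double.MAX_VALUE;
    // Exhaustive search; a real configurator prunes this space
    // because each candidate is expensive to benchmark in the cloud.
    for (String fs : fsTypes)
      for (int s : stripes)
        for (int n : servers) {
          Config c = new Config(fs, s, n);
          double cost = predictedCost(c);
          if (cost < bestCost) { bestCost = cost; best = c; }
        }
    System.out.println("Best config: " + best + " cost=" + bestCost);
  }
}
```

The nested loops make the cost of exhaustive search obvious: each candidate in a real cloud would mean provisioning and benchmarking a cluster, which is exactly why a predictive model is worth building.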
