Slideshows by User: sdec2011
SlideShare feed for slideshows by user sdec2011 (Tue, 05 Jul 2011 23:48:46 GMT)

SDEC2011 Big engineer vs small entrepreneur
Link: /slideshow/sdec2011-big-engineer-vs-small-entreprenuer/8518573
Posted: Tue, 05 Jul 2011 23:48:46 GMT by sdec2011 (Korea Sdec)

SDEC2011 Implementing me2day friend suggestion
Link: /slideshow/sdec2011-implementing-me2day-friend-suggestion/8517416
Posted: Tue, 05 Jul 2011 20:11:17 GMT by sdec2011 (Korea Sdec)
In the SNS domain, the response time of friend-suggestion and several social network analysis (SNA) algorithms grows with the square of the number of relationships, and the relationship count itself is growing at an accelerating rate, so the existing relational-database usage pattern suffers from poor performance. To guarantee performance and scalability, we developed the following methods for friend suggestion and SNA: relation pruning using an intimacy value, a no-join strategy that keeps all data in memory, and a distributed graph structure.
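The description names three techniques: intimacy-based relation pruning, a join-free in-memory layout, and a distributed graph. Below is a minimal, self-contained sketch of the first two under assumed names (RelationPruner, the intimacy scale, and the top-k cutoff are all illustrative, not me2day's actual code): each user's edges are held in memory, and only the k most intimate ones feed the friend-of-friend pass, which bounds the otherwise quadratic cost.

```java
import java.util.*;

// Hypothetical sketch of intimacy-based relation pruning: each user's
// relationships live in memory, and only the strongest edges are kept
// before running friend-of-friend suggestion.
public class RelationPruner {
    // userId -> (friendId -> intimacy score)
    private final Map<Long, Map<Long, Double>> graph = new HashMap<>();

    public void addRelation(long user, long friend, double intimacy) {
        graph.computeIfAbsent(user, k -> new HashMap<>()).put(friend, intimacy);
    }

    // Keep only the top-k most intimate relations, bounding the quadratic
    // cost of enumerating friends of friends.
    public List<Long> prunedFriends(long user, int k) {
        Map<Long, Double> rels = graph.getOrDefault(user, Map.of());
        return rels.entrySet().stream()
                .sorted(Map.Entry.<Long, Double>comparingByValue().reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .toList();
    }

    // Friend-of-friend suggestion over the pruned neighborhoods only,
    // ranked by how many pruned paths reach each candidate.
    public List<Long> suggest(long user, int k) {
        Set<Long> direct = new HashSet<>(prunedFriends(user, k));
        Map<Long, Integer> candidates = new HashMap<>();
        for (long friend : direct) {
            for (long fof : prunedFriends(friend, k)) {
                if (fof != user && !direct.contains(fof)) {
                    candidates.merge(fof, 1, Integer::sum);
                }
            }
        }
        return candidates.entrySet().stream()
                .sorted(Map.Entry.<Long, Integer>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .toList();
    }
}
```

With pruning to k edges per user, the suggestion pass touches at most k² edges per query regardless of a user's raw relationship count, which is the scalability property the talk is after.
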
SDEC2011 Introducing Hadoop
Link: /slideshow/sdec2011-introducing-hadoop/8487712
Posted: Sat, 02 Jul 2011 03:42:44 GMT by sdec2011 (Korea Sdec)

Sdec2011 shashank-introducing hadoop
Link: /slideshow/sdec2011-shashankintroducing-hadoop-8487666/8487666
Posted: Sat, 02 Jul 2011 03:35:19 GMT by sdec2011 (Korea Sdec)

SDEC2011 NoSQL Data modelling
Link: /slideshow/sdec2011-nosql-data-modelling/8487655
Posted: Sat, 02 Jul 2011 03:33:24 GMT by sdec2011 (Korea Sdec)

SDEC2011 Essentials of Pig
Link: /slideshow/sdec2011-essentials-of-pig/8487653
Posted: Sat, 02 Jul 2011 03:33:17 GMT by sdec2011 (Korea Sdec)

SDEC2011 Essentials of Mahout
Link: /slideshow/sdec2011-shashankessentials-ofmahout-8487651/8487651
Posted: Sat, 02 Jul 2011 03:33:11 GMT by sdec2011 (Korea Sdec)

SDEC2011 Essentials of Hive
Link: /slideshow/sdec2011-shashankessentials-ofhive-8487650/8487650
Posted: Sat, 02 Jul 2011 03:33:10 GMT by sdec2011 (Korea Sdec)

SDEC2011 NoSQL concepts and models
Link: /sdec2011/sdec2011-nos-conceptsandmodels
Posted: Sat, 02 Jul 2011 03:33:08 GMT by sdec2011 (Korea Sdec)

Sdec2011 Introducing Hadoop
Link: /slideshow/sdec2011-no-sqldatamodellingconceptsandcases/8485105
Posted: Fri, 01 Jul 2011 18:06:59 GMT by sdec2011 (Korea Sdec)

SDEC2011 Replacing legacy Telco DB/DW to Hadoop and Hive
Link: /slideshow/sdec2011-replacing-legacy-telco-dbdw-to-hadoop-and-hive/8485086
Posted: Fri, 01 Jul 2011 18:05:23 GMT by sdec2011 (Korea Sdec)
Telecom companies currently store their data in a database or data warehouse, run it through an ETL process, and perform statistics and analysis with OLAP tools or data mining engines. However, with the data explosion that accompanied the spread of smartphones, traditional data stores such as the DB and DW are no longer sufficient to cope with this Big Data. As an alternative, storing data in Hadoop and performing ETL and ad-hoc queries with Hive is being introduced, with China Mobile cited as the most prominent example. But this stack is adopted mainly by new projects, where the barriers to applying the new Hive data model and HQL are low; replacing an existing database with Hadoop plus Hive is extremely difficult when a large body of tables and SQL queries already exists. NexR is migrating a telecom company's data from an Oracle DB to Hadoop and converting many existing Oracle SQL queries to Hive HQL. Although HQL's syntax is similar to ANSI SQL, it lacks a large portion of the basic functions and barely supports Oracle analytic functions such as rank(), which are heavily used in statistical analysis. Differences in data types, such as null handling, are a further obstacle. In this presentation, we share our experience converting Oracle SQL to Hive HQL and developing the missing functions with MapReduce, and we introduce several ideas and experiments for improving Hive performance. http://sdec.kr/
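The central pain point here, Oracle analytic functions such as rank() being absent from HQL, was commonly worked around at the time with a stateful UDF combined with DISTRIBUTE BY/SORT BY. The sketch below shows that general pattern under assumed names (class Rank, table t); it is not NexR's actual implementation.

```java
import org.apache.hadoop.hive.ql.exec.UDF;

// A sketch of the classic "rank over partition" workaround used before
// Hive gained native windowing: a stateful UDF that counts rows and resets
// its counter whenever the partition key changes. Rows must be routed and
// ordered with DISTRIBUTE BY key SORT BY key, value so each partition
// arrives contiguously, in order, at a single reducer.
public class Rank extends UDF {
    private String lastKey = null;
    private int rank = 0;

    public int evaluate(String key) {
        if (lastKey == null || !lastKey.equals(key)) {
            lastKey = key;   // new partition: restart the counter
            rank = 0;
        }
        return ++rank;
    }
}
// Example HQL (names are illustrative):
//   ADD JAR rank-udf.jar;
//   CREATE TEMPORARY FUNCTION rank_udf AS 'Rank';
//   SELECT key, value, rank_udf(key) AS rnk
//   FROM (SELECT key, value FROM t DISTRIBUTE BY key SORT BY key, value DESC) s;
```

The trick only works because DISTRIBUTE BY/SORT BY guarantees the UDF sees each partition as one ordered run, which is exactly the per-key state an Oracle RANK() OVER (PARTITION BY ...) maintains.
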
SDEC2011 Rapidant
Link: /slideshow/sdec2011-rapidant/8485084
Posted: Fri, 01 Jul 2011 18:05:14 GMT by sdec2011 (Korea Sdec)
http://sdec.kr/

SDEC2011 Mahout - the what, the how and the why
Link: /slideshow/sdec2011-mahout-the-what-the-how-and-the-why/8485065
Posted: Fri, 01 Jul 2011 18:02:23 GMT by sdec2011 (Korea Sdec)
Mahout is an open source machine learning library from Apache. From its humble beginnings at Apache Lucene, the project has grown into an active community of developers, machine learning experts, and enthusiasts. With v0.5 released recently, the project has been focusing full steam on developing stable APIs, with an eye on the major v1.0 milestone. The speaker has been with Mahout since his days as a computer science student. The talk covers Mahout's major use cases, the design decisions, things that worked, things that didn't, and what to expect in future releases. http://sdec.kr/
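For a concrete sense of the APIs the talk refers to, here is a minimal user-based recommender using Mahout's Taste interfaces roughly as they stood around v0.5; the preference file name and the user/item IDs are placeholders.

```java
import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

// Minimal user-based collaborative filtering with Mahout's Taste API.
// prefs.csv holds "userID,itemID,rating" lines and is a placeholder name.
public class TasteExample {
    public static void main(String[] args) throws Exception {
        DataModel model = new FileDataModel(new File("prefs.csv"));
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        NearestNUserNeighborhood neighborhood =
                new NearestNUserNeighborhood(10, similarity, model);
        Recommender recommender =
                new GenericUserBasedRecommender(model, neighborhood, similarity);
        List<RecommendedItem> items = recommender.recommend(1L, 3); // top 3 for user 1
        for (RecommendedItem item : items) {
            System.out.println(item.getItemID() + " -> " + item.getValue());
        }
    }
}
```
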
SDEC2011 Going by TACC
Link: /slideshow/sdec2011-going-by-tacc/8485050
Posted: Fri, 01 Jul 2011 17:59:42 GMT by sdec2011 (Korea Sdec)
Key-value stores are widely used in applications that require only primary-key data access, which is common in many web applications. Because developing an industrial-grade key-value store is expensive, the conventional solution is to use an existing store and layer application semantics on top of its primitives. This approach leads to potential inefficiencies, because application-specific semantics often allow optimizations inside the store itself. We present an alternative: using the TACC platform to build a key-value store that is both performant and easily customizable. The TACC programming model separates state from logic: state is stored in a collection of distributed in-memory database instances, while logic is performed by distributed agents that react asynchronously to changes in the stored objects. Agents selectively subscribe to updates through a fine-grained hierarchical directory system that mounts objects into a local namespace. TACC provides performance comparable to hand-coded C while reducing source code size to a fraction of that. We describe the implementation and performance of a scalable, fault-tolerant key-value store built on TACC, pointing out the benefits realized by TACC's strong, user-defined types and its triggering/notification facility. http://sdec.kr/
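To make the "state separated from logic" model concrete, here is a purely hypothetical Java sketch of the agent pattern the abstract describes: a store of objects in a hierarchical namespace, plus agents that subscribe to a subtree and are notified asynchronously on changes. Every name is invented for illustration; TACC's real API does not appear in the source.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical illustration of state/logic separation: writes mutate the
// in-memory store, and agents mounted on a path prefix react asynchronously.
public class AgentStore {
    public interface Agent {
        void onUpdate(String path, Object newValue);
    }

    private final Map<String, Object> store = new ConcurrentHashMap<>();
    private final Map<String, List<Agent>> subscribers = new ConcurrentHashMap<>();
    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Agents mount a subtree of the hierarchical namespace.
    public void subscribe(String prefix, Agent agent) {
        subscribers.computeIfAbsent(prefix, k -> new CopyOnWriteArrayList<>()).add(agent);
    }

    // Writes update state, then notify matching agents off the caller's thread.
    public void put(String path, Object value) {
        store.put(path, value);
        subscribers.forEach((prefix, agents) -> {
            if (path.startsWith(prefix)) {
                agents.forEach(a -> pool.submit(() -> a.onUpdate(path, value)));
            }
        });
    }
}
```
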
SDEC2011 Glory-FS development & Experiences
Link: /slideshow/sdec2011-gloryfs-development-experiences/8485046
Posted: Fri, 01 Jul 2011 17:58:48 GMT by sdec2011 (Korea Sdec)
http://sdec.kr/

SDEC2011 Using Couchbase for social game scaling and speed
Link: /slideshow/sdec2011-using-couchbase-for-social-game-scaling-and-speed/8485033
Posted: Fri, 01 Jul 2011 17:57:08 GMT by sdec2011 (Korea Sdec)
A social game, by its very nature, can spread very quickly to a large user population. Because the game is typically interactive, the speed of retrieving the information needed for the user's interactions with the system is critical. When building their new game Animal Party, the developers at Tribal Crossing needed to get away from the complexity of sharding an SQL database, and they were also looking to cut the administration cost of operating traditional data stores. After evaluating several NoSQL solutions, they found that Couchbase's Membase server met most of their critical requirements. Couchbase's simple key/value model let Tribal Crossing model game interactions easily, its fast read and write performance matched the demands of an interactive social game, and elastic scalability was achieved by simply adding nodes to the Couchbase cluster, with no modifications to the application. Relying on Couchbase's technology, Tribal Crossing built and scaled Animal Party with a small team and no dedicated system administrators. http://sdec.kr/
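Because Membase speaks the memcached protocol, application code can stay as simple as plain key/value calls. The sketch below uses the spymemcached Java client with placeholder hosts, keys, and values; it is not Tribal Crossing's actual code.

```java
import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

// Reading and writing game state through the memcached protocol, which
// Membase exposes. Host names and keys are placeholders.
public class GameStateStore {
    public static void main(String[] args) throws Exception {
        MemcachedClient client =
                new MemcachedClient(AddrUtil.getAddresses("node1:11211 node2:11211"));

        // Store a player's state under a simple key; expire after one hour.
        client.set("player:42:state", 3600, "{\"level\":7,\"coins\":120}").get();

        // Low-latency read on the interactive path.
        String state = (String) client.get("player:42:state");
        System.out.println(state);

        client.shutdown();
    }
}
```

Scaling out is then an operational action (adding nodes to the cluster) rather than a code change, which is the "no modifications required to the application" point in the description.
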
SDEC2011 Arcus NHN memcached cloud
Link: /slideshow/sdec2011-arcus-nhn-memcached-cloud-8467157/8467157
Posted: Thu, 30 Jun 2011 05:01:24 GMT by sdec2011 (Korea Sdec)
Arcus is a data caching cloud built on memcached, a memory-based key-value store, developed to meet requirements from various NHN services. Using ZooKeeper, Arcus keeps each client's cache server list up to date, and its cloud architecture can add or remove cache servers flexibly while eliminating extra network latency between client and server. Furthermore, to support storing and computing over data collections, a growing requirement among services, it provides list, set, and b+tree structures. http://sdec.kr/
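The up-to-date server list the description mentions maps naturally onto a ZooKeeper children watch. Below is a minimal sketch of that pattern with an assumed znode path (/cache/servers); Arcus's real znode layout and client API are not shown in the source.

```java
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Keep an up-to-date cache server list by watching a membership znode and
// re-reading its children whenever they change.
public class CacheServerListWatcher implements Watcher {
    private static final String MEMBERSHIP_PATH = "/cache/servers"; // placeholder path
    private final ZooKeeper zk;
    private volatile List<String> servers;

    public CacheServerListWatcher(String zkHosts) throws Exception {
        zk = new ZooKeeper(zkHosts, 15000, this);
        refresh();
    }

    private void refresh() throws Exception {
        // Re-register the watch on every read; ZooKeeper watches are one-shot.
        servers = zk.getChildren(MEMBERSHIP_PATH, this);
        System.out.println("cache servers: " + servers);
        // A real client would rebuild its hash ring over `servers` here.
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeChildrenChanged) {
            try {
                refresh();
            } catch (Exception e) {
                e.printStackTrace(); // sketch only: real code would retry/backoff
            }
        }
    }
}
```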