SlideShare feed for slideshows by user sageweil1
Last updated Thu, 15 Nov 2018 13:29:52 GMT

Ceph data services in a multi- and hybrid cloud world
/slideshow/ceph-data-services-in-a-multi-and-hybrid-cloud-world/123090793
Published Thu, 15 Nov 2018 13:29:52 GMT
IT organizations of the future (and present) are faced with managing infrastructure that spans multiple private data centers and multiple public clouds. Emerging tools and operational patterns like Kubernetes and microservices are easing the process of deploying applications across multiple environments, but the Achilles heel of such efforts remains that most applications require large quantities of state, whether in databases, object stores, or file systems. Unlike stateless microservices, state is hard to move. Ceph is known for providing scale-out file, block, and object storage within a single data center, but it also includes a robust set of multi-cluster federation capabilities. This talk will cover how Ceph's underlying multi-site capabilities complement and enable true portability across cloud footprints--public and private--and how viewing Ceph from a multi-cloud perspective has fundamentally shifted our data services roadmap, especially for Ceph object storage.

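Ceph's object storage is exposed through the S3-compatible RADOS Gateway (RGW), and the multi-site capabilities the abstract refers to replicate objects between RGW zones. As a hedged sketch of what that looks like from an application's point of view (the endpoint URLs, credentials, and bucket name below are placeholders, and it assumes two zones already configured for multisite sync), boto3 can write through one zone and read from the other:

    # Sketch: write to one RGW zone, read from another, assuming multisite
    # replication between them. Endpoints, keys, and bucket are placeholders.
    import boto3

    def s3_client(endpoint):
        # Both zones present the same S3 API; shared credentials are an
        # assumption of this example.
        return boto3.client(
            's3',
            endpoint_url=endpoint,
            aws_access_key_id='ACCESS_KEY',
            aws_secret_access_key='SECRET_KEY',
        )

    zone_a = s3_client('http://rgw-zone-a.example.com:8080')
    zone_b = s3_client('http://rgw-zone-b.example.com:8080')

    zone_a.put_object(Bucket='demo', Key='hello.txt', Body=b'written in zone A')

    # Once the multisite sync catches up, the object is visible in zone B.
    obj = zone_b.get_object(Bucket='demo', Key='hello.txt')
    print(obj['Body'].read())
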
Making distributed storage easy: usability in Ceph Luminous and beyond
/slideshow/making-distributed-storage-easy-usability-in-ceph-luminous-and-beyond/86762493
Published Fri, 26 Jan 2018 22:21:09 GMT
Distributed storage is complicated, and historically Ceph hasn't spent a lot of time trying to hide that complexity, instead focusing on correctness, features, and flexibility. There has been a recent shift in focus to simplifying and streamlining the user/operator experience so that the information that is actually important is available without the noise of irrelevant details. Recent feature work has also focused on simplifying configurations that were previously possible but required tedious configuration steps to manage. This talk will cover the key new efforts in Ceph Luminous that aim to simplify and automate cluster management, as well as the plans for upcoming releases to address longstanding Cephisms that make it "hard" (e.g., choosing PG counts).

What's new in Luminous and Beyond
/slideshow/whats-new-in-luminous-and-beyond/81738746
Published Wed, 08 Nov 2017 05:13:54 GMT
Ceph is an open source distributed storage system that provides scalable object, block, and file interfaces on commodity hardware. Luminous, the latest stable release of Ceph, was just released in August. This talk will cover all that is new in Luminous (there is a lot!) and provide a sneak peek at the roadmap for Mimic, which is due out in the spring.

Community Update at OpenStack Summit Boston
/sageweil1/community-update-at-openstack-summit-boston
Published Thu, 11 May 2017 20:15:45 GMT
Community update for Luminous and beyond.

BlueStore, A New Storage Backend for Ceph, One Year In
/slideshow/bluestore-a-new-storage-backend-for-ceph-one-year-in/73563711
Published Thu, 23 Mar 2017 21:47:45 GMT
BlueStore is a new storage backend for Ceph OSDs that consumes block devices directly, bypassing the local XFS file system that is used today. Its design is motivated by everything we've learned about OSD workloads and interface requirements over the last decade, and everything that has worked well and not so well when storing objects as files in local file systems like XFS, btrfs, or ext4. BlueStore has been under development for a bit more than a year now and has reached a state where it is becoming usable in production. This talk will cover the BlueStore design, how it has evolved over the last year, and what challenges remain before it can become the new default storage backend.

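To make "consumes block devices directly" concrete, the toy sketch below separates the two things a BlueStore-style backend manages: object data written to extents on a raw device, and object metadata committed through a key-value store (RocksDB in the real backend). This is an illustrative simplification, not BlueStore's actual design or code; the bump allocator and the dict standing in for the KV store are assumptions of the example.

    # Toy sketch of the BlueStore idea: object data on a raw block device,
    # object metadata in a transactional key-value store (RocksDB in real life).
    # Illustrative only -- not the actual BlueStore design or code.
    import json

    BLOCK_SIZE = 4096

    class ToyStore:
        def __init__(self, device_path, kv):
            # device_path must already exist (e.g. a preallocated file or device);
            # kv is a plain dict standing in for a real transactional KV store.
            self.dev = open(device_path, 'r+b')
            self.kv = kv
            self.next_block = 0          # trivial bump allocator

        def write_object(self, name, data):
            nblocks = (len(data) + BLOCK_SIZE - 1) // BLOCK_SIZE
            offset = self.next_block * BLOCK_SIZE
            self.next_block += nblocks
            # 1) write the data directly into the allocated extent
            self.dev.seek(offset)
            self.dev.write(data.ljust(nblocks * BLOCK_SIZE, b'\0'))
            self.dev.flush()
            # 2) commit the name -> extent mapping; the object logically exists
            #    only once this metadata commit lands
            self.kv[name] = json.dumps({'offset': offset, 'length': len(data)})

        def read_object(self, name):
            meta = json.loads(self.kv[name])
            self.dev.seek(meta['offset'])
            return self.dev.read(meta['length'])
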
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage
/slideshow/ceph-now-and-later-our-plan-for-open-unified-cloud-storage/67990571
Published Tue, 01 Nov 2016 13:33:33 GMT
Ceph is a highly scalable open source distributed storage system that provides object, block, and file interfaces on a single platform. Although Ceph RBD block storage has dominated OpenStack deployments for several years, maturing object (S3, Swift, and librados) interfaces and stable CephFS (file) interfaces now make Ceph the only fully open source unified storage platform. This talk will cover Ceph's architectural vision and project mission and how our approach differs from alternative approaches to storage in the OpenStack ecosystem. In particular, we will look at how our open development model dovetails well with OpenStack, how major contributors are advancing Ceph capabilities and performance at a rapid pace to adapt to new hardware types and deployment models, and what major features we are prioritizing for the next few years to meet the needs of expanding cloud workloads.

A crash course in CRUSH
/slideshow/a-crash-course-in-crush/63588017
Published Wed, 29 Jun 2016 22:38:38 GMT
CRUSH is the powerful, highly configurable algorithm Red Hat Ceph Storage uses to determine how data is stored across the many servers in a cluster. A healthy Red Hat Ceph Storage deployment depends on a properly configured CRUSH map. In this session, we will review the Red Hat Ceph Storage architecture and explain the purpose of CRUSH. Using example CRUSH maps, we will show you what works and what does not, and explain why. Presented at Red Hat Summit 2016-06-29.

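For readers who have not seen CRUSH before, its core idea is deterministic, weight-aware pseudo-random placement: any client can compute where data lives from the cluster map alone, with no lookup table. The toy sketch below illustrates that idea with a straw-draw style selection (hash each object/device pair, scale by device weight, keep the longest straws). It is a simplification for intuition only, not the actual CRUSH algorithm; real CRUSH adds bucket hierarchies, failure-domain rules, and retry logic that this sketch omits, and the example CRUSH maps in the session express those declaratively.

    # Toy CRUSH-like placement: clients deterministically compute placement
    # from hashes and device weights alone -- no lookup table. Simplified.
    import hashlib
    import math

    def _unit_hash(*parts):
        # Deterministically map the inputs to a float in (0, 1].
        digest = hashlib.sha256('|'.join(str(p) for p in parts).encode()).hexdigest()
        return (int(digest[:16], 16) + 1) / float(1 << 64)

    def place(object_name, devices, n_replicas):
        # devices: {device_id: weight}. Each device draws a hash-based "straw"
        # whose expected length grows with its weight; the longest straws win.
        def straw(dev):
            return math.log(_unit_hash(object_name, dev)) / devices[dev]
        return sorted(devices, key=straw, reverse=True)[:n_replicas]

    devices = {'osd.0': 1.0, 'osd.1': 1.0, 'osd.2': 2.0}   # osd.2 has 2x capacity
    for obj in ('alpha', 'beta', 'gamma'):
        print(obj, '->', place(obj, devices, n_replicas=2))
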
BlueStore: a new, faster storage backend for Ceph
/slideshow/bluestore-a-new-faster-storage-backend-for-ceph-63311181/63311181
Published Tue, 21 Jun 2016 21:25:18 GMT
An updated and expanded talk on BlueStore, similar to the one originally presented at Vault.

What's new in Jewel and Beyond
/slideshow/whats-new-and-jewel-and-beyond/63069253
Published Tue, 14 Jun 2016 21:22:48 GMT
Ceph project status update presented at Ceph Day Suisse at CERN.

BlueStore: a new, faster storage backend for Ceph
/slideshow/bluestore-a-new-faster-storage-backend-for-ceph/61430788
Published Wed, 27 Apr 2016 18:45:47 GMT
Traditionally, Ceph has made use of local file systems like XFS or btrfs to store its data. However, the mismatch between the OSD's requirements and the POSIX interface provided by kernel file systems has a huge performance cost and requires a lot of complexity. BlueStore, an entirely new OSD storage backend, utilizes block devices directly, doubling performance for most workloads. This talk will cover the motivation for a new backend, the design and implementation, the improved performance on HDDs, SSDs, and NVMe, and some of the thornier issues we had to overcome when replacing tried and true kernel file systems with entirely new code running in userspace.

Ceph and RocksDB
/slideshow/ceph-and-rocksdb/57934667
Published Fri, 05 Feb 2016 19:04:17 GMT
My short talk at a RocksDB meetup on 2016-02-03 about the new Ceph OSD backend BlueStore and its use of RocksDB.

The State of Ceph, Manila, and Containers in OpenStack
/sageweil1/the-state-of-ceph-manila-and-containers-in-openstack
Published Thu, 29 Oct 2015 04:28:15 GMT
OpenStack users deploying Ceph for block (Cinder) and object (S3/Swift) storage are unsurprisingly looking at Manila and CephFS to round out a unified storage solution. Ceph is based on a low-level object storage layer called RADOS that serves as the foundation for its object, block, and file services. Manila's file-as-a-service in OpenStack enables a range of container-based use cases with Docker and Kubernetes, but a variety of deployment architectures are possible. This talk will cover the current state of CephFS support in Manila, including upstream Manila support and Manila works in progress; a progress update on CephFS itself, including new multi-tenancy support to facilitate cloud deployments; and a discussion of how this impacts container deployment scenarios in an OpenStack cloud.

Keeping OpenStack storage trendy with Ceph and containers
/slideshow/keeping-openstack-storage-trendy-with-ceph-and-containers/48408597
Published Wed, 20 May 2015 22:44:22 GMT
The conventional approach to deploying applications on OpenStack uses virtual machines (usually KVM) backed by block devices (usually Ceph RBD). As interest increases in container-based application deployment models like Docker, it is worth looking at what alternatives exist for combining compute and storage (both shared and non-shared). Mapping RBD block devices directly to host kernels trades isolation for performance and may be appropriate for many private clouds without significant changes to the infrastructure. More importantly, moving away from virtualization allows for non-block interfaces and a range of alternative models based on file or object storage. Attendees will leave this talk with a basic understanding of the storage components and services available to both virtual machines and Linux containers, a view of several ways they can be combined and the performance, reliability, and security trade-offs associated with those possibilities, and several proposals for how the relevant OpenStack projects (Nova, Cinder, Manila) can work together to make it easy.

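The RBD volumes discussed above can also be driven programmatically rather than through the kernel client. As a hedged illustration (the pool name, image name, and size are placeholders, and this is a minimal sketch rather than how Cinder actually provisions volumes), the python-rbd binding can create and write an image like this:

    # Minimal sketch using the rados/rbd Python bindings: create a small RBD
    # image and do a direct write/read. Pool and image names are placeholders.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')          # assumes a pool named 'rbd'
        try:
            rbd.RBD().create(ioctx, 'demo-image', 1 * 1024**3)   # 1 GiB image
            image = rbd.Image(ioctx, 'demo-image')
            try:
                image.write(b'hello from a container host', 0)
                print(image.read(0, 27))
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
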
Distributed Storage and Compute With Ceph's librados (Vault 2015)
/slideshow/20150311-vault15-librados/45734101
Published Wed, 11 Mar 2015 22:16:53 GMT
The Ceph distributed storage system sports object, block, and file interfaces to a single storage cluster. These interfaces are built on a distributed object storage and compute platform called RADOS, which exports a conceptually simple yet powerful interface for storing and processing large amounts of data and is well-suited for backing web-scale applications and data analytics. It features a rich object model, efficient key/value storage, atomic transactions (including efficient compare-and-swap semantics), object cloning and other primitives for supporting snapshots, simple inter-client communication and coordination (a la ZooKeeper), and the ability to extend the object interface using arbitrary code executed on the storage node. This talk will focus on the librados API, how it is used, the security model, and some examples of RADOS classes implementing interesting functionality.

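As a small taste of the librados API described above (via the Python binding; the pool name and keys below are placeholders, and error handling is omitted), an application can store object data, attach per-object metadata, and read both back:

    # Minimal librados sketch using the Python 'rados' binding: write an object,
    # attach an xattr, and read both back. Pool name and keys are placeholders.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('demo-pool')    # assumes this pool exists
        try:
            ioctx.write_full('greeting', b'hello rados')      # object data
            ioctx.set_xattr('greeting', 'lang', b'en')        # per-object metadata
            print(ioctx.read('greeting'))                     # b'hello rados'
            print(ioctx.get_xattr('greeting', 'lang'))        # b'en'
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

The richer primitives the abstract mentions (omap key/value data, atomic transactions, and object classes) are available through the same library.
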
Storage tiering and erasure coding in Ceph (SCaLE13x)
/slideshow/20150222-scale-sdc-tiering-and-ec/44995414
Published Sun, 22 Feb 2015 16:53:30 GMT
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements. Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overheads. However, in practice erasure codes have different performance characteristics than traditional replication and, under some workloads, come at some expense. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations. This talk will cover a few Ceph fundamentals, discuss the new tiering and erasure coding features, and then discuss a variety of ways that the new capabilities can be leveraged.

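To make the "lower storage overheads" point concrete, here is a quick worked comparison (the k=4, m=2 profile is an illustrative choice, not something prescribed by the talk): 3x replication stores three full copies of every byte, while a k=4, m=2 erasure code splits each object into four data chunks plus two coding chunks and can tolerate the loss of any two of them.

    # Worked comparison of raw-space overhead: replication vs. erasure coding.
    # The specific EC profile (k=4, m=2) is an illustrative example.

    def replication_overhead(copies):
        # Raw bytes stored per logical byte.
        return copies

    def ec_overhead(k, m):
        # k data chunks plus m coding chunks per k chunks of logical data.
        return (k + m) / k

    print('3x replication:', replication_overhead(3), 'bytes raw per logical byte')
    print('EC k=4, m=2   :', ec_overhead(4, 2), 'bytes raw per logical byte')
    # -> 3.0 vs 1.5 bytes of raw capacity per logical byte; the EC pool tolerates
    #    any two lost chunks, matching the two-failure tolerance of 3x replication.
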
Profile: sageweil1 (ceph.com/)
Specialties: distributed system design, storage and file systems, management, software development. Sage helped design Ceph as part of his graduate research at the University of California, Santa Cruz. Since then, he has continued to refine the system with the goal of providing a stable, next-generation distributed storage system for Linux.