Event Streaming Architectures with Confluent and ScyllaDB (ScyllaDB)
Jeff Bean will lead a discussion of event-driven architectures, Apache Kafka, Kafka Connect, KSQL and Confluent Cloud. Then we'll talk about some uses of Confluent and Scylla together, including a co-deployment with Lookout, ScyllaDB and Confluent in the IoT space, and the upcoming native connector.
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A... (confluent)
Microservices, events, containers, and orchestrators are dominating our vernacular today. As operations teams adapt to support these technologies in production, cloud-native platforms like Pivotal Cloud Foundry and Kubernetes have quickly risen to serve as force multipliers of automation, productivity and value.
Apache Kafka® provides developers with a critically important component as they build and modernize applications for cloud-native architecture.
This talk will explore:
Why cloud-native platforms and why run Apache Kafka on Kubernetes?
What kind of workloads are best suited for this combination?
Tips to determine the path forward for legacy monoliths in your application portfolio
Demo: Running Apache Kafka as a Streaming Platform on Kubernetes
Confluent Operator as Cloud-Native Kafka Operator for Kubernetes (Kai Wähner)
Agenda:
- Cloud Native vs. SaaS / Serverless Kafka
- The Emergence of Kubernetes
- Kafka on K8s Deployment Challenges
- Confluent Operator as Kafka Operator
- Q&A
Confluent Operator enables you to:
Provision, manage, and operate Confluent Platform (including ZooKeeper, Apache Kafka, Kafka Connect, KSQL, Schema Registry, REST Proxy, and Control Center)
Deployment on any Kubernetes Platform (Vanilla K8s, OpenShift, Rancher, Mesosphere, Cloud Foundry, Amazon EKS, Azure AKS, Google GKE, etc.)
Automate provisioning of Kafka pods in minutes
Monitor SLAs through Confluent Control Center or Prometheus
Scale Kafka elastically, handle failover, and automate rolling updates
Automate security configuration
Built on our first-hand knowledge of running Confluent at scale
Fully supported for production usage
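As a rough illustration of what operator-driven provisioning looks like, the sketch below submits a Kafka custom resource through the Kubernetes Python client; the CRD group, version, and spec fields are assumptions made for the example, not the Operator's documented schema.

```python
# Illustrative sketch only: hand a Kafka custom resource to the cluster so an
# operator can reconcile it into broker pods. The CRD group/version and spec
# fields below are assumed for the example, not documented Operator schema.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

kafka_cluster = {
    "apiVersion": "platform.confluent.io/v1beta1",  # assumed group/version
    "kind": "Kafka",
    "metadata": {"name": "kafka", "namespace": "confluent"},
    "spec": {
        "replicas": 3,                                   # broker count
        "image": {"application": "confluentinc/cp-server:6.0.0"},
        "dataVolumeCapacity": "100Gi",                   # per-broker storage
    },
}

api.create_namespaced_custom_object(
    group="platform.confluent.io", version="v1beta1",
    namespace="confluent", plural="kafkas", body=kafka_cluster,
)
```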
Applying ML on your Data in Motion with AWS and Confluent | Joseph Morais, Co... (HostedbyConfluent)
Event-driven application architectures are becoming increasingly common as a large number of users demand more interactive, real-time, and intelligent responses. Yet it can be challenging to decide how to capture and perform real-time data analysis and deliver differentiating experiences. Join experts from Confluent and AWS to learn how to build Apache Kafka®-based streaming applications backed by machine learning models. Adopting the recommendations will help you establish repeatable patterns for high-performing event-based apps.
The Kubernetes cloud native landscape is vast. Delivering a solution requires managing a puzzling array of required tooling, monitoring, disaster recovery, and other solutions that lie outside the realm of the central cluster. The governing body of Kubernetes, the Cloud Native Computing Foundation, has developed guidance for organizations interested in this topic by publishing the Cloud Native Landscape, but while a list of options is helpful, it does not give operations and DevOps professionals the knowledge they need to execute.
Learn best practices for setting up and managing the tools needed around Kubernetes. This presentation covers popular open source options (to avoid lock-in) and how one can implement and manage these tools on an ongoing basis. Learn from, and do not repeat, the mistakes of previous centralized platforms.
In this session, attendees will learn:
1. Cloud Native Landscape 101 - Prometheus, Sysdig, NGINX, and more. Where do they all fit in a Kubernetes solution?
2. Avoiding the OpenStack sprawl of managing a multiverse of required tooling in the Kubernetes world.
3. Leveraging technology like Kubernetes, now available on DC/OS, to provide part of the infrastructure framework that helps manage cloud native application patterns.
James Watters Kafka Summit NYC 2019 Keynote (James Watters)
The document discusses how Spring Boot and Kafka can form the basis of a new enterprise application platform that enables continuous delivery and efficient scaling through microservices and event-driven architecture. It provides examples of companies like Netflix and T-Mobile that have successfully adopted this approach. The document advocates an "event-first" design and argues this platform approach allows for arbitrary scaling, multi-cloud deployment, and increased developer autonomy and agility.
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with K... (confluent)
Microservices, events, containers, and orchestrators are dominating our vernacular today. As operations teams adapt to support these technologies in production, cloud-native platforms like Cloud Foundry and Kubernetes have quickly risen to serve as force multipliers of automation, productivity and value. Kafka is providing developers a critically important component as they build and modernize applications to cloud-native architecture. This talk will explore:
Why cloud-native platforms and why run Kafka on Kubernetes?
What kind of workloads are best suited for this combination?
Tips to determine the path forward for legacy monoliths in your application portfolio
Running Kafka as a Streaming Platform on Container Orchestration
Elastically Scaling Kafka Using Confluent (confluent)
This document discusses how Confluent Platform provides elastic scaling for Apache Kafka. It offers fully managed cloud services through Confluent Cloud or self-managed software. Confluent Cloud allows users to easily scale Kafka workloads from 0 MBps to GBps without complex provisioning. It also offers pay-for-use pricing where customers only pay for the data streamed, with the ability to scale to zero. For self-managed deployments, Confluent Platform enables dynamic scaling of Kafka clusters on Kubernetes through features like tiered storage and self-balancing clusters that can rebalance partitions in seconds versus hours for other Kafka services.
Twitter's Apache Kafka Adoption Journey | Ming Liu, Twitter (HostedbyConfluent)
Until recently, the Messaging team at Twitter had been running an in-house-built Pub/Sub system, namely EventBus (built on top of Apache DistributedLog and Apache BookKeeper, and similar in architecture to Apache Pulsar) to cater to our pub/sub needs. In 2018, we made the decision to move to Apache Kafka by migrating existing use cases as well as onboarding new use cases directly onto Apache Kafka. Fast forward to today: Kafka is now an essential piece of Twitter infrastructure and processes over 200M messages per second. In this talk, we will share the learnings and challenges from our journey moving to Apache Kafka.
Build and Deploy Cloud Native Camel Quarkus routes with Tekton and Knative (Omar Al-Safi)
In this talk, we will leverage cloud native stacks and tools to build Camel Quarkus routes natively using GraalVM native-image on a Tekton pipeline and deploy these routes to a Kubernetes cluster with Knative installed. We will dive into the following topics: an introduction to Camel, Camel Quarkus, GraalVM Native Image, Tekton, and Knative, followed by an end-to-end demo that walks through the whole deployment pipeline for cloud native Camel Quarkus routes, builds the routes with GraalVM native-image on a Tekton pipeline, and deploys them to a Kubernetes cluster with Knative. Targeted audience: users with basic Camel knowledge.
New Features in Confluent Platform 6.0 / Apache Kafka 2.6 (Kai Wähner)
New features in Confluent Platform 6.0 / Apache Kafka 2.6, including REST Proxy and API, Tiered Storage for AWS S3 and GCP GCS, Cluster Linking (on-premise, edge, hybrid, multi-cloud), Self-Balancing Clusters, and ksqlDB.
Strategies For Migrating From SQL to NoSQL: The Apache Kafka Way (ScyllaDB)
This document discusses strategies for migrating from SQL to NoSQL databases using Apache Kafka. It outlines the challenges of modernizing legacy databases, how Confluent can help with the migration process, and proposes a three-phase plan. The plan involves initially migrating data sources using connectors, then optimizing the data with stream processing in ksqlDB, and finally modernizing by sending the data to cloud databases. The document provides an overview of Confluent's technologies and services that can help accelerate and simplify the database migration.
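To make the first phase concrete, here is a minimal sketch of registering a JDBC source connector through the Kafka Connect REST API; the hostnames, credentials, table, and topic prefix are placeholders, and a real migration would tune many more connector options.

```python
# Sketch: register a JDBC source connector via the Kafka Connect REST API.
# Hostnames, credentials, and table/topic names are placeholders.
import requests

connector = {
    "name": "legacy-sql-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://legacy-db:3306/appdb",
        "connection.user": "reader",
        "connection.password": "secret",
        "table.whitelist": "orders",        # table(s) to stream into Kafka
        "mode": "incrementing",             # capture new rows by increasing id
        "incrementing.column.name": "id",
        "topic.prefix": "sql.",             # rows land on the topic "sql.orders"
    },
}

resp = requests.post("http://connect:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json()["name"], "registered")
```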
Over 100 million subscribers from over 190 countries enjoy the Netflix service. This leads to over a trillion events, amounting to 3 PB, flowing through the Keystone infrastructure to help improve customer experience and glean business insights. The self-serve Keystone stream processing service processes these messages in near real time with at-least-once semantics in the cloud. This enables users to focus on extracting insights rather than worrying about building out scalable infrastructure. I'll share the details of this platform and our experience building it.
Zero Down Time Move From Apache Kafka to Confluent With Justin Dempsey | Curr... (HostedbyConfluent)
Zero Down Time Move From Apache Kafka to Confluent With Justin Dempsey | Current 2022
Kafka has been a crucial facet of the overall SAS Customer Intelligence 360 (CI360) architecture for quite some time. Until 2021, the Kafka supporting CI360 was managed on standalone virtual machines. Traditional VM-backed infrastructure posed administrative challenges for ensuring consistent software patching, adding scale on demand, and providing a highly available, redundant, and durable message bus for the CI360 microservices.
The goal was clear: the backend Kafka platform needed to move from the aging legacy systems to a more cost-effective and stable solution.
The standalone VM-backed Kafka clusters were migrated to Amazon Elastic Kubernetes Service (EKS) with zero down time. Cluster Linking and the Confluent Operator were used as part of this effort. Both technologies were crucial in ensuring that the systems were online and available throughout the migration.
This session details the journey of moving standalone Kafka to Kafka on K8s, including the scope of the effort, Total Cost of Ownership (TCO), technical architecture, and the migration itself.
NOTE: Experiences related to this effort are being published in a joint case study between SAS and Confluent titled "SAS Powers Instant, Real-Time Omnichannel Marketing at Massive Scale with Confluent's Hybrid Capabilities".
Netflix uses containers to run both batch jobs and services. For batch jobs, containers simplify resource management and allow jobs like model training and media encoding to easily share resources. Services are more complex to run in containers due to challenges like constant resizing, statefulness, and networking. Netflix addresses these challenges through solutions like a VPC networking driver and reusing existing infrastructure services for containers. Looking ahead, Netflix aims to run more containers at larger scale for areas like developer experience, continuous integration, and internal resource optimization.
From Monoliths to Microservices - A Journey With Confluent With Gayathri Veal... (HostedbyConfluent)
Indeed is consciously transforming our monolith applications into microservices. Moving monoliths from on-premise to a hybrid architecture is a non-trivial endeavor. It is, as we know, a marathon and never a race: we do not refactor all of our applications at once but progress incrementally toward resilience in the cloud.
By partnering with Confluent, we were able to procedurally migrate many of our workloads, both critical and non-critical, primarily using Kafka and adopting a data-domain-driven approach. In this talk, you will learn:
1. How to piece complex puzzles when you have bits of information
2. What questions to ask to prioritize feature improvements
3. How to enumerate impact
4. How to let your vendor know what is valuable
With over 20 years of experience working with various databases and datastores, I will share real examples of successes, failures, and lessons we learned when working with Confluent Cloud by:
- Implementing strategies
- Addressing short- and long-term value, both technical and business
- Using very methodical methods to form roadmaps
If you're in discussions surrounding engineering platforms at your organization, then this talk is for you. If you are a data-driven engineering organization with solid leadership and sound decisions behind it, join us for this talk and let's have a discussion.
Confluent Platform 5.5 + Apache Kafka 2.5 => New Features (JSON Schema, Proto... (Kai Wähner)
Confluent Platform 5.5 introduces several new features to simplify event streaming development: Protobuf and JSON Schema support throughout the platform, exactly-once semantics for non-Java clients, administrative functions in the REST Proxy, expanded ksqlDB functionality with new aggregates and data types, and a ksqlDB flow view in Confluent Control Center for increased visibility into streaming applications. The release is also based on the latest Apache Kafka 2.5 version.
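As a small illustration of the JSON Schema support, the sketch below serializes records with confluent-kafka-python's JSONSerializer so they are validated and registered against Schema Registry; the broker and registry addresses, the topic, and the schema itself are invented for the example.

```python
# Sketch: produce JSON-Schema-validated records through Schema Registry.
# Addresses, topic, and schema are illustrative placeholders.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.json_schema import JSONSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

schema_str = """
{
  "type": "object",
  "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
  "required": ["id"]
}
"""

registry = SchemaRegistryClient({"url": "http://schema-registry:8081"})
serializer = JSONSerializer(schema_str, registry)
producer = Producer({"bootstrap.servers": "broker:9092"})

event = {"id": 1, "name": "first"}
producer.produce(
    topic="events",
    value=serializer(event, SerializationContext("events", MessageField.VALUE)),
)
producer.flush()
```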
Bridge to Cloud: Using Apache Kafka to Migrate to AWS (confluent)
Watch this talk here: https://www.confluent.io/online-talks/bridge-to-cloud-apache-kafka-migrate-aws
Speakers: Priya Shivakumar, Director of Product, Confluent + Konstantine Karantasis, Software Engineer, Confluent + Rohit Pujari, Partner Solutions Architect, AWS
Most companies start their cloud journey with a new use case or a new application. Sometimes these applications can run independently in the cloud, but often they need data from the on-premises datacenter. Existing applications will migrate slowly, and will need a strategy and the technology to enable a multi-year migration.
In this session, we will share how companies around the world are using Confluent Cloud, a fully managed Apache Kafka service, to migrate to AWS. By implementing a central-pipeline architecture using Apache Kafka to sync on-prem and cloud deployments, companies can accelerate migration times and reduce costs.
In this online talk we will cover:
How to take the first step in migrating to AWS
How to reliably sync your on-premises applications using a persistent bridge to the cloud
Learn how Confluent Cloud can make this daunting task simple, reliable and performant
See a demo of the hybrid-cloud and multi-region deployment of Apache Kafka
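Purely as an illustration of the persistent-bridge idea above, the sketch below mirrors one topic from an on-premises cluster to a cloud cluster with a consume-and-forward loop; in practice Confluent Replicator or MirrorMaker would do this job, and every address here is a placeholder.

```python
# Conceptual bridge: mirror a topic from on-prem Kafka to a cloud cluster.
# Real deployments use Replicator/MirrorMaker; addresses are placeholders.
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "onprem-broker:9092",
    "group.id": "bridge-to-cloud",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "cloud-broker:9092"})

consumer.subscribe(["orders"])
while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # Preserve key and value so partitioning stays stable downstream.
    producer.produce("orders", key=msg.key(), value=msg.value())
    producer.poll(0)  # serve delivery callbacks without blocking
```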
The document provides an overview of Confluent's product strategy and recent innovations in cloud-native data streaming. It discusses Confluent Cloud's key differentiators of being cloud native, complete, and everywhere. Recent updates are highlighted for each pillar, including expanded cluster management, stream processing capabilities with ksqlDB and Flink, and more connectors and regions. A demo then showcases features like Stream Designer and Cluster Linking. The roadmap teases expanding in-flight processing and data policies to increase real-time data value.
Reinventing Kafka in the Data Streaming Era - Jun Rao (confluent)
This document discusses reinventing Apache Kafka in the data streaming era and introduces Confluent Cloud as a cloud-native data streaming platform. It notes that self-managing Kafka clusters comes with many complexities around sizing, provisioning, upgrades, security, and more. Confluent Cloud aims to solve these problems by providing a fully managed Kafka service with elastic scaling, infinite storage, high availability, and other benefits. It also outlines some new features of Confluent Cloud like SQL support, data governance tools, and stream sharing capabilities.
Monitoring Kubernetes across data center and cloud (Datadog)
This document summarizes a presentation about monitoring Kubernetes clusters across data centers and cloud platforms using Datadog. It discusses how Kubernetes provides container-centric infrastructure and flexibility for hybrid cloud deployments. It also describes how monitoring works in Google Container Engine using cAdvisor, Heapster, and Stackdriver. Finally, it discusses how Datadog and Tectonic can be used to extend Kubernetes monitoring capabilities for enterprises.
Netflix keystone streaming data pipeline @scale in the cloud - dbtb-2016 (Monal Daxini)
Keystone processes over 700 billion events per day (1 petabyte) with at-least-once processing semantics in the cloud. We will explore in detail how we leverage Kafka, Samza, Docker, and Linux at scale to implement a multi-tenant pipeline in the AWS cloud within a year. We will also share our plans for offering Stream Processing as a Service for all of Netflix.
This document provides a summary of Netflix's architecture and use of open source software. It discusses:
- Why Netflix open sources software, including gathering feedback, collaboration, and improving retention and recruiting
- Popular Netflix open source projects like Eureka, Ribbon, and Hystrix that are widely used in cloud architectures
- Netflix's microservices architecture and emphasis on automation, high availability, and continuous delivery
- How Netflix ensures operational visibility and security at scale through open source tools like Turbine, Atlas, and Security Monkey
- Getting started resources for understanding and running Netflix's technologies like ZeroToCloud and ZeroToDocker workshops
stackconf 2020 | The path to a Serverless-native era with Kubernetes by Paolo... (NETWAYS)
Serverless is one of the hottest design patterns in the cloud today. I'll cover how serverless paradigms are changing the way we develop applications and cloud infrastructures, and how to implement serverless-style workloads with Kubernetes.
We'll go through the latest Kubernetes-based serverless technologies, covering the most important aspects including pricing, scalability, observability, and best practices.
Pivotal CloudFoundry on Google Cloud Platform (Ronak Banka)
This document is a slide presentation by Ronak Banka on using Pivotal Cloud Foundry (PCF) and Google Cloud Platform (GCP) together. It discusses how PCF provides a platform for deploying applications on GCP that enables both developer and operator productivity through features like automated deployments, service integration, and operations. It also highlights benefits of using PCF on GCP like performance, scale, cost savings, and access to differentiated GCP services.
The Netflix Way to deal with Big Data Problems (Monal Daxini)
The document discusses Netflix's approach to handling big data problems. It summarizes Netflix's data pipeline system called Keystone that was built in a year to replace a legacy system. Keystone ingests over 1 trillion events per day and processes them using technologies like Kafka, Samza and Spark Streaming. The document emphasizes Netflix's culture of freedom and responsibility and how it helped the small team replace the legacy system without disruption while achieving massive scale.
Self-hosting Kafka at Scale: Netflix's Journey & Challenges
1. Self-hosting Kafka at Scale
Netflix's Journey & Challenges
Piyush Goyal, Staff Engineer, Data Platform
Nick Mahilani, Staff Engineer, Data Platform
Current 2024
2. Thank you for being here!
RAISE YOUR HAND
IF YOU USE KAFKA IN YOUR ORGANIZATION
3. KEEP YOUR HAND UP
IF YOU ARE SELF-HOSTING APACHE KAFKA
(NOT using a Kafka service provider)
4. WHAT CAN YOU EXPECT FROM THIS SESSION?
How Netflix leverages Kafka to unlock various use cases
Our Long Journey with Kafka
How we operate Kafka today
Challenges and learnings
5. Business Context
Keystone Platform (2015)
Evolution to Composable Architecture
Kafka as a Service (2021)
KaaS Features and Architecture
KaaS Learnings
Our Journey With Kafka
7. Microservices Ecosystem
Systems at our scale generate a lot of data
This data needs to be transported to where it can be processed and analysed
8. Centralized Event Pipeline (2015)
The System should have the following characteristics:
Easy to use
Highly Available
Scalable
Near Real-Time
9. Centralized Event Pipeline (2015)
The System should have the following characteristics:
Easy to use
Highly Available
Scalable
Near Real-Time
This gave rise to Netflix's Keystone Platform in 2015
10. Business Context
Keystone Platform (2015)
Evolution to Composable Architecture
Kafka as a Service (2021)
KaaS Features and Architecture
KaaS Learnings
Our Journey With Kafka
11. Keystone Platform (2015)
Highly abstracted product
Data Movement to Sinks
Simple Real-time processing (Filter, Projection)
Client Library, UI, Management plane, and Data Plane
Used Apache Kafka and Apache Flink under the hood
19. Two types of Kafka Clusters
Fronting clusters: multi-tenant; used to publish data; abstracted from producers; controlled cluster access; critical for high availability; larger fleet
Consumer clusters: multi-tenant; used to consume data; coupled with consumers; smaller fleet
20. Resilience to cluster failure
The Keystone client performs a topic lookup to find the fronting cluster for each stream:
Stream | Cluster | Topic
playback_events | Cluster A | playback_events
ad_events | Cluster B | ad_events
[Diagram: the client publishes each stream to its mapped topic on fronting Cluster A or Cluster B]
21. Resilience to cluster failure
When a fronting cluster fails, the lookup maps the stream to an additional cluster so the client can fail over:
Stream | Cluster | Topic
playback_events | Cluster A, Cluster B | playback_events
ad_events | Cluster B | ad_events
[Diagram: the client now also publishes playback_events to its topic on Cluster B]
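A hedged sketch of what these two slides imply on the client side: each stream resolves to an ordered list of fronting clusters, and a failed publish falls over to the next cluster in the lookup. The structure and names are illustrative, not Keystone's actual client code.

```python
# Illustrative failover publish across fronting clusters; NOT the actual
# Keystone client. Bootstrap addresses are placeholders.
from confluent_kafka import Producer

TOPIC_LOOKUP = {
    # stream -> ordered fronting clusters to try
    "playback_events": ["cluster-a:9092", "cluster-b:9092"],  # B added when A fails
    "ad_events": ["cluster-b:9092"],
}

def send_to(bootstrap: str, topic: str, value: bytes) -> None:
    """One-shot synchronous produce that raises if delivery fails."""
    errors = []
    p = Producer({"bootstrap.servers": bootstrap, "message.timeout.ms": 5000})
    p.produce(topic, value=value,
              on_delivery=lambda err, _msg: errors.append(err) if err else None)
    p.flush()
    if errors:
        raise ConnectionError(str(errors[0]))

def publish(stream: str, payload: bytes) -> None:
    for bootstrap in TOPIC_LOOKUP[stream]:
        try:
            send_to(bootstrap, topic=stream, value=payload)
            return
        except ConnectionError:
            continue  # fail over to the next fronting cluster in the lookup
    raise RuntimeError(f"all fronting clusters failed for stream {stream}")
```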
22. Things worked well..
Highly abstracted and easy-to-use product
Only takes a couple of minutes to create simple data pipelines
Huge adoption - more than 6,000 data pipelines
>100M messages per second (>150 GB/s)
Quick real-time transformations like filtering and projection
23. Not everything worked well
For streaming-only consumers, it was highly inefficient:
Unnecessary hops
Higher latency
Extra Cost
Noisy neighbors in a multi-tenanted environment
No direct access to Kafka for producers
Administration of Kafka was semi-automated
24. And we needed more..
Highly abstracted product means limited functionality done well
Solved 80% of use cases, but what about the rest?
New Business Requirements demanded more functionality
Event Driven Architecture
Change Data Capture
Low latency use-cases
Custom Stream Processing
Direct Kafka integration for Third party tools
25. Business Context
Keystone Platform (2015)
Evolution to Composable Architecture
Kafka as a Service (2021)
KaaS Features and Architecture
KaaS Learnings
Our Journey With Kafka
27. Whether to build or buy?
We evaluated the tradeoffs for our situation (Year 2020-21)
Customizability
Long term costs
Available in-house expertise
Minimize Risks
After careful consideration, we decided to BUILD our own managed Kafka platform. YMMV!
28. Business Context
Keystone Data Pipeline (2015)
Evolution to Composable Architecture
Kafka as a Service (2021)
KaaS Architecture
KaaS Learnings
Our Journey With Kafka
29. Kafka as a Service (KaaS)
Provisioning | Client Library | Schema Management | Observability | Alerting & Auto Remediation | Security & Access Control
36. Business Context
Keystone Data Pipeline (2015)
Evolution to Composable Architecture
Kafka as a Service (2021)
KaaS Architecture
KaaS Learnings
Our Journey With Kafka
40. KaaS Scale
190 million messages / second
150+ GB ingested / second
8+ PB persisted state
475+ dedicated Kafka Clusters
11,500 Kafka brokers
35,000 Kafka topics
41. Business Context
Keystone Data Pipeline (2015)
Evolution to Composable Architecture
Kafka as a Service (2021)
KaaS Architecture
KaaS Learnings
Our Journey With Kafka
51. Software Upgrade Strategy #1: move Kafka state from local instance storage to EBS
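A rough sketch of why EBS shortens upgrades: the broker's log volume can be detached from the old instance and attached to its replacement instead of re-replicating terabytes over the network. The EC2 calls below are real boto3 APIs, but the IDs are placeholders and the surrounding orchestration (broker shutdown, fencing, keeping the same broker.id) is omitted.

```python
# Sketch: move a broker's EBS-backed log volume to a replacement instance so
# the new broker starts with the old log segments already on disk. IDs are
# placeholders; real automation also handles broker shutdown and config.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
OLD_INSTANCE, NEW_INSTANCE, VOLUME = "i-old123", "i-new456", "vol-abc789"

ec2.detach_volume(VolumeId=VOLUME, InstanceId=OLD_INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME])

ec2.attach_volume(VolumeId=VOLUME, InstanceId=NEW_INSTANCE, Device="/dev/sdf")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME])
```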
52. EBS is awesome, but..
EBS is expensive at large scale
Moved large-scale clusters back to AWS instance types with local disk
Back to where we started: longer upgrade times
58. Right Sizing a Kafka Cluster
Num Consumers
Throughput
Replication Factor
Retention
Which EC2 instance type?
How many instances?
How much disk?
59. Right Sizing a Kafka Cluster
Inputs (Num Consumers, Throughput, Replication Factor, Retention) feed the Kafka Capacity Model, which proposes candidate cluster shapes:
Num Brokers | Instance Type | Cost
3 | i3en.2xl | $
3 | i4i.2xl | $$
6 | r5.4xl + EBS | $$$
https://github.com/Netflix-Skunkworks/service-capacity-modeling/blob/main/service_capacity_modeling/models/org/netflix/kafka.py
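The linked repository holds the real model; purely as a toy sketch of the arithmetic behind it, the function below sizes a cluster from the same inputs shown on the slide, under invented per-broker disk and network assumptions.

```python
# Toy capacity estimate -- a simplified sketch of the slide's inputs/outputs,
# NOT the Netflix service-capacity-modeling code linked above.
import math

def estimate_brokers(write_mib_per_sec, num_consumers, replication_factor,
                     retention_hours, per_broker_disk_gib, per_broker_net_mib):
    """Rough broker count satisfying both the disk and network constraints."""
    # Disk: every byte written is kept replication_factor times for retention.
    total_disk_gib = (write_mib_per_sec * 3600 * retention_hours
                      * replication_factor / 1024)
    brokers_for_disk = total_disk_gib / per_broker_disk_gib

    # Network: each write ships to (RF - 1) replicas and is read per consumer.
    total_net_mib = write_mib_per_sec * ((replication_factor - 1) + num_consumers)
    brokers_for_net = total_net_mib / per_broker_net_mib

    return max(3, math.ceil(max(brokers_for_disk, brokers_for_net)))

# Example: 100 MiB/s ingest, 2 consumer groups, RF=3, 8h retention, assuming
# ~2.3 TiB usable disk and ~250 MiB/s usable network per broker.
print(estimate_brokers(100, 2, 3, 8,
                       per_broker_disk_gib=2300, per_broker_net_mib=250))
```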
60. Business Context
Keystone Data Pipeline (2015)
Evolution to Composable Architecture
Kafka as a Service (2021)
KaaS Features and Architecture
KaaS Learnings
Our Journey With Kafka
61. Key Takeaway: composable architectures are easier to scale and evolve with the business
[Diagram: a closed system built around a single pipeline abstraction evolves into a composable system in which the pipeline abstraction sits on top of Kafka as a Service and Stream Processing]
62. Q & A
Self-hosting Kafka at Scale
Netflix's Journey & Challenges
Piyush Goyal Nick Mahilani
63. References
S3 Flash Bootloader (precursor to AWS Replace Root Volume)
Joey's talk on capacity planning optimally in the cloud
Kyle and JS's talk on iterating faster on Stateful Services in the cloud