Slideshows by User: lbernail
SlideShare feed for slideshows by User: lbernail
Fri, 20 Nov 2020 18:59:14 GMT

How the OOM Killer Deleted My Namespace
/slideshow/how-the-oom-killer-deleted-my-namespace/239358850
Fri, 20 Nov 2020 18:59:14 GMT

Running Kubernetes at scale is challenging, and you can often end up in situations where you have to debug complex and unexpected issues. Doing so requires a detailed understanding of how the different components work and interact with each other. Over the last three years, Datadog migrated most of its workloads to Kubernetes and now manages dozens of clusters consisting of thousands of nodes each. During this journey, engineers have debugged complex issues with root causes that were sometimes very surprising. In this talk, Laurent and Tabitha will share some of these stories, including a favorite: how a complex interaction between familiar Kubernetes components allowed an OOM-killer invocation to trigger the deletion of a namespace.

Kubernetes DNS Horror Stories
/slideshow/kubernetes-dns-horror-stories/239358825
Fri, 20 Nov 2020 18:56:50 GMT

DNS is one of the Kubernetes core systems and can quickly become a source of issues when you're running clusters at scale. For over a year at Datadog, we've run Kubernetes clusters with thousands of nodes that host workloads generating tens of thousands of DNS queries per second. It wasn't easy to build an architecture able to handle this load, and we've had our share of problems along the way.
This talk starts with a presentation of how Kubernetes DNS works. It then dives into the challenges we've faced, which span a variety of topics related to load, connection tracking, upstream servers, rolling updates, resolver implementations, and performance. We then show how our DNS architecture evolved over time to address or mitigate these problems. Finally, we share our solutions for detecting these problems before they happen, and for identifying misbehaving clients.

Evolution of kube-proxy (Brussels, Fosdem 2020)
/slideshow/evolution-of-kubeproxy-brussels-fosdem-2020/226790640
Mon, 03 Feb 2020 10:01:44 GMT

Kube-proxy enables access to Kubernetes services (virtual IPs backed by pods) by configuring client-side load balancing on nodes. The first implementation relied on a userspace proxy, which was not very performant. The second implementation uses iptables and is still the one found in most Kubernetes clusters. Recently, the community introduced an alternative based on IPVS. This talk will start with a description of the different modes and how they work. It will then focus on the IPVS implementation, the improvements it brings, the issues we encountered and how we fixed them, as well as the remaining challenges and how they could be addressed. Finally, the talk will present alternative solutions based on eBPF, such as Cilium.

Making the most out of kubernetes audit logs
/slideshow/making-the-most-out-of-kubernetes-audit-logs/197948548
Tue, 26 Nov 2019 14:46:39 GMT

The Kubernetes audit logs are a rich source of information: all of the calls made to the API server are stored, along with additional metadata such as usernames, timings, and source IPs. They help to answer questions such as "What is overloading my control plane?" or "Which sequence of events led to this problematic situation?". These questions are hard to answer otherwise, especially in large clusters. At Datadog, we have been running clusters with 1000+ nodes for more than a year, and during that time the audit logs have proved invaluable.
In this presentation, we will first introduce the audit logs, explain how they are configured, and review the type of data they store. Finally, we will describe in detail several scenarios where they have helped us to diagnose complex problems.

Kubernetes the Very Hard Way. Velocity Berlin 2019
/slideshow/kubernetes-the-very-hard-way-velocity-berlin-2019/191049721
Wed, 06 Nov 2019 14:26:56 GMT

Running large Kubernetes clusters is difficult. Datadog has been running large-scale Kubernetes clusters (thousands of nodes) for more than a year and has learned several lessons the hard way.
Laurent Bernaille examines the challenges Datadog faced during this journey. He dives into problems that arise when you run large clusters and, crucially, how to address them, with detailed examples based on Datadog's experience across different cloud providers. You'll explore complex runtime and networking issues: at scale, you discover problems in low-level components that are very rare but happen regularly when you have a large number of nodes.
Additionally, Laurent provides examples of how to improve the architecture of clusters to increase scalability and performance, both on the control plane and the data plane (communication between pods and ingress traffic). If scale can be hard on the control plane, it's even harder on tools from the ecosystem, which have rarely been tested on very large clusters. He presents several of the tools Datadog uses and how it had to improve them to handle its scale. And you'll leave with practical advice on how to build a good relationship with the community and start contributing back.

Kubernetes the Very Hard Way. Lisa Portland 2019
/slideshow/kubernetes-the-very-hard-way-188349737/188349737
Tue, 29 Oct 2019 22:23:34 GMT

Running large Kubernetes clusters is challenging. At large scales, practitioners need to adapt and tune both their architectures and component configurations in specialized ways.
Our organisation has been running large-scale Kubernetes clusters (up to 2000 nodes, and growing) for more than a year, and we have learned several lessons the hard way. This talk will dive into complex runtime and networking issues that occur when running Kubernetes in production at scale. We will provide examples of how to improve the architecture of clusters to increase scalability and performance, both on the control plane and the data plane. Further, tools from the greater ecosystem will be examined, as they are rarely tested within the context of very large clusters.
Finally, the talk will also discuss the mutually beneficial relationship we built with the larger Kubernetes community by providing feedback on the tools and contributing both fixes and improvements upstream.

10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! (Container Day Paris)
/slideshow/10-ways-to-shoot-yourself-in-the-foot-with-kubernetes-9-will-surprise-you-container-day-paris/148911982
Wed, 05 Jun 2019 09:26:54 GMT

Kubernetes is a very powerful and complicated system, and many users don't understand the underlying systems. Come learn how your users can abuse container runtimes, overwhelm your control plane, and cause outages - it's actually quite easy!
In the last year, we have containerized hundreds of applications and deployed them in large-scale clusters (more than 1000 nodes). The journey was eventful and we learned a lot along the way. We'll share stories of our ten favorite Kubernetes foot guns, including the dangers of cargo culting, rolling updates gone wrong, the pitfalls of initContainers, and nightmarish daemonset upgrades. The talk will present solutions we adopted to avoid or work around some of these problems, and will finally show several improvements we plan to deploy in the future.
Similar to the Kubecon talk with the same title, with a few new incidents.

10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!
/slideshow/10-ways-to-shoot-yourself-in-the-foot-with-kubernetes-9-will-surprise-you-148015068/148015068
Tue, 28 May 2019 19:37:44 GMT

Kubernetes is a very powerful and complicated system, and many users don't understand the underlying systems. Come learn how your users can abuse container runtimes, overwhelm your control plane, and cause outages - it's actually quite easy!
In the last year, we have containerized hundreds of applications and deployed them in large-scale clusters (more than 1000 nodes). The journey was eventful and we learned a lot along the way. We'll share stories of our ten favorite Kubernetes foot guns, including the dangers of cargo culting, rolling updates gone wrong, the pitfalls of initContainers, and nightmarish daemonset upgrades. The talk will present solutions we adopted to avoid or work around some of these problems, and will finally show several improvements we plan to deploy in the future.

Optimizing kubernetes networking
/slideshow/optimizing-kubernetes-networking/125570238
Tue, 11 Dec 2018 03:26:18 GMT

Running large Kubernetes clusters is challenging. This talk focuses on how you can optimize your network setup in clusters with 1000-2000 nodes. It discusses standard ingress solutions and their drawbacks, as well as potential solutions.

Kubernetes at Datadog the very hard way
/lbernail/kubernetes-at-datadog-the-very-hard-way
Sat, 03 Nov 2018 18:16:51 GMT

This presentation describes the challenges we faced building, scaling, and operating a Kubernetes cluster of more than 1000 nodes to host the Datadog applications.

Deep Dive in Docker Overlay Networks
/lbernail/deep-dive-in-docker-overlay-networks-81193529
Wed, 25 Oct 2017 13:06:17 GMT

The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink, and a distributed key-value store. This talk will present each of these mechanisms one by one, along with their userland tools, and show hands-on how they interact together when setting up an overlay to connect containers.
The talk will continue with a demo showing how to build your own simple overlay using these technologies.

Deeper dive in Docker Overlay Networks
/slideshow/deeper-dive-in-docker-overlay-networks-81091247/81091247
Mon, 23 Oct 2017 08:06:06 GMT

The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink, and a distributed key-value store. This talk will present each of these mechanisms one by one, along with their userland tools, and show hands-on how they interact together when setting up an overlay to connect containers. The talk will continue with a demo showing how to build your own simple overlay using these technologies. Finally, it will show how we can dynamically distribute IP and MAC information to every host in the overlay using BGP EVPN.

Discovering OpenBSD on AWS
/slideshow/discovering-openbsd-on-aws/80085683
Sat, 23 Sep 2017 16:04:21 GMT

The story behind the OpenBSD AMI, how it integrates with AWS, and a demo of a dynamic VPN built using Consul (all running on OpenBSD).

Operational challenges behind Serverless architectures
/slideshow/operational-challenges-behind-serverless-architectures/76098234
Thu, 18 May 2017 15:50:17 GMT

Serverless architectures are promising and will play an important role in the coming years, but the ecosystem around serverless is still pretty young. We have been operating Lambda-based applications for about a year and have faced several challenges. In this presentation, we share these challenges and propose some solutions to work around them.

Deep dive in Docker Overlay Networks
/slideshow/deep-dive-in-docker-overlay-networks/75197114
Wed, 19 Apr 2017 22:45:51 GMT

The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink, and a distributed key-value store. This talk will present each of these mechanisms one by one, along with their userland tools, and show hands-on how they interact together when setting up an overlay to connect containers.
The talk will continue with a demo showing how to build your own simple overlay using these technologies.

Feedback on AWS re:invent 2016
/slideshow/feedback-on-aws-reinvent-2016/69908895
Wed, 07 Dec 2016 10:49:06 GMT

Overview of the new services announced at re:invent.

Early recognition of encrypted applications
/slideshow/slides-pam2007/46679567
Mon, 06 Apr 2015 07:20:11 GMT

Most tools to recognize the application associated with network connections use well-known signatures as the basis for their classification. This approach is very effective in enterprise and campus networks to pinpoint forbidden applications (peer-to-peer, for instance) or security threats. However, it is easy to use encryption to evade these mechanisms. In particular, Secure Sockets Layer (SSL) libraries such as OpenSSL are widely available and can easily be used to encrypt any type of traffic. In this paper, we propose a method to detect applications in SSL-encrypted connections. Our method uses only the size of the first few packets of an SSL connection to recognize the application, which enables early classification. We test our method on packet traces collected on two campus networks and on manually encrypted traces. Our results show that we are able to recognize the application in an SSL connection with more than 85% accuracy.

Early application identification. CONEXT 2006
/slideshow/early-application-identification-conext-2006/46679315
Mon, 06 Apr 2015 07:09:37 GMT

The automatic detection of applications associated with network traffic is an essential step for network security and traffic engineering. Unfortunately, simple port-based classification methods are not always efficient, and systematic analysis of packet payloads is too slow. Most recent research proposals use flow statistics to classify traffic flows once they are finished, which limits their applicability to online classification. In this paper, we evaluate the feasibility of application identification at the beginning of a TCP connection. Based on an analysis of packet traces collected on eight different networks, we find that it is possible to distinguish the behavior of an application from the observation of the size and the direction of the first few packets of the TCP connection. We apply three techniques to cluster TCP connections: K-Means, Gaussian Mixture Models, and spectral clustering. The resulting clusters are used together with assignment and labeling heuristics to design classifiers. We evaluate these classifiers on different packet traces. Our results show that the first four packets of a TCP connection are sufficient to classify known applications with an accuracy over 90% and to identify new applications as unknown with a probability of 60%.