際際滷shows by User: lbernail (Laurent Bernaille)
Feed last updated: Fri, 20 Nov 2020 18:59:14 GMT

How the OOM Killer Deleted My Namespace
Fri, 20 Nov 2020 18:59:14 GMT | /slideshow/how-the-oom-killer-deleted-my-namespace/239358850

Running Kubernetes at scale is challenging, and you can often end up in situations where you have to debug complex and unexpected issues. This requires understanding in detail how the different components work and interact with each other. Over the last 3 years, Datadog migrated most of its workloads to Kubernetes and now manages dozens of clusters consisting of thousands of nodes each. During this journey, engineers have debugged complex issues with root causes that were sometimes very surprising. In this talk, Laurent and Tabitha will share some of these stories, including a favorite: how a complex interaction between familiar Kubernetes components allowed an OOM-killer invocation to trigger the deletion of a namespace.

Kubernetes DNS Horror Stories
Fri, 20 Nov 2020 18:56:50 GMT | /slideshow/kubernetes-dns-horror-stories/239358825

DNS is one of the Kubernetes core systems and can quickly become a source of issues when you're running clusters at scale. For over a year at Datadog, we've run Kubernetes clusters with thousands of nodes that host workloads generating tens of thousands of DNS queries per second. It wasn't easy to build an architecture able to handle this load, and we've had our share of problems along the way. This talk starts with a presentation of how Kubernetes DNS works. It then dives into the challenges we've faced, which span a variety of topics related to load, connection tracking, upstream servers, rolling updates, resolver implementations, and performance. We then show how our DNS architecture evolved over time to address or mitigate these problems. Finally, we share our solutions for detecting these problems before they happen, and identifying misbehaving clients.
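
A concrete flavor of the load problem: with the default ndots:5 setting and a typical pod search path, a single application-level lookup of an external name fans out into several DNS queries. Below is a minimal pure-Python sketch of this expansion; the search path and ndots value are common defaults, not taken from a specific cluster:

```python
# Sketch of resolv.conf search-path expansion as a glibc-style resolver
# performs it. Assumed values: ndots:5 and a typical pod search path.
SEARCH_PATH = [
    "default.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
]
NDOTS = 5

def expanded_queries(name):
    """Return the names a resolver tries for `name`, in order."""
    if name.endswith("."):           # fully qualified: a single query
        return [name]
    queries = []
    if name.count(".") >= NDOTS:     # "absolute enough": tried as-is first
        queries.append(name + ".")
    queries += [f"{name}.{suffix}." for suffix in SEARCH_PATH]
    if name.count(".") < NDOTS:      # otherwise the bare name comes last
        queries.append(name + ".")
    return queries

# An external name with a single dot walks the whole search path (and each
# attempt is typically doubled for A and AAAA) before succeeding:
for q in expanded_queries("datadoghq.com"):
    print(q)
```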

Evolution of kube-proxy (Brussels, Fosdem 2020)
Mon, 03 Feb 2020 10:01:44 GMT | /slideshow/evolution-of-kubeproxy-brussels-fosdem-2020/226790640

Kube-proxy enables access to Kubernetes services (virtual IPs backed by pods) by configuring client-side load-balancing on nodes. The first implementation relied on a userspace proxy, which was not very performant. The second implementation used iptables and is still the one used in most Kubernetes clusters. Recently, the community introduced an alternative based on IPVS. This talk will start with a description of the different modes and how they work. It will then focus on the IPVS implementation: the improvements it brings, the issues we encountered and how we fixed them, and the remaining challenges and how they could be addressed. Finally, the talk will present alternative solutions based on eBPF, such as Cilium.
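
The scaling difference between the iptables and IPVS modes can be caricatured in a few lines: the KUBE-SERVICES iptables chain is a list of rules evaluated in order for each packet, while IPVS keeps its virtual servers in a hash table. A toy Python model of the lookup cost (illustrative only, not actual kube-proxy code):

```python
import random

# Toy model of per-packet service lookup; not actual kube-proxy code.
# iptables mode: the KUBE-SERVICES chain is a list of rules evaluated in
# order, so the match cost grows linearly with the number of services.
# IPVS mode: virtual servers live in a kernel hash table, so lookup is O(1).

services = {f"10.96.{i // 250}.{i % 250}": f"svc-{i}" for i in range(10000)}
chain = list(services.items())      # stand-in for the iptables rule chain

def iptables_lookup(vip):
    for rule_vip, svc in chain:     # linear scan over every rule
        if rule_vip == vip:
            return svc
    return None

def ipvs_lookup(vip):
    return services.get(vip)        # single hash probe

vip = random.choice(chain)[0]
assert iptables_lookup(vip) == ipvs_lookup(vip)
print(vip, "->", ipvs_lookup(vip))
```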

Making the most out of Kubernetes audit logs
Tue, 26 Nov 2019 14:46:39 GMT | /slideshow/making-the-most-out-of-kubernetes-audit-logs/197948548

The Kubernetes audit logs are a rich source of information: all of the calls made to the API server are stored, along with additional metadata such as usernames, timings, and source IPs. They help to answer questions such as "What is overloading my control plane?" or "Which sequence of events led to this problematic situation?". These questions are hard to answer otherwise, especially in large clusters. At Datadog, we have been running clusters with 1000+ nodes for more than a year, and during that time the audit logs have proved invaluable. In this presentation, we will first introduce the audit logs, explain how they are configured, and review the type of data they store. Finally, we will describe in detail several scenarios where they have helped us diagnose complex problems.
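
As a taste of the "What is overloading my control plane?" analysis, here is a short sketch that aggregates audit events by user, verb, and resource. It assumes the JSON log backend (one JSON event per line) and a hypothetical audit.log path:

```python
import json
from collections import Counter

# Aggregate API server audit events by (user, verb, resource) to find noisy
# clients. Assumes the JSON log backend, one event per line; the file path
# is hypothetical.
top = Counter()
with open("audit.log") as f:
    for line in f:
        event = json.loads(line)
        user = (event.get("user") or {}).get("username", "unknown")
        verb = event.get("verb", "unknown")
        resource = (event.get("objectRef") or {}).get("resource", "-")
        top[(user, verb, resource)] += 1

for (user, verb, resource), count in top.most_common(10):
    print(f"{count:8d}  {user:40s} {verb:10s} {resource}")
```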

Kubernetes the Very Hard Way. Velocity Berlin 2019
Wed, 06 Nov 2019 14:26:56 GMT | /slideshow/kubernetes-the-very-hard-way-velocity-berlin-2019/191049721

Running large Kubernetes clusters is difficult. Datadog has been running large-scale Kubernetes clusters (thousands of nodes) for more than a year and has learned several lessons the hard way. Laurent Bernaille examines the challenges Datadog faced during this journey. He dives into problems that arise when you run large clusters and, crucially, how to address them, providing detailed examples based on Datadog's experience across different cloud providers. You'll explore complex runtime and networking issues: at scale you discover complex issues in low-level components that are very rare but happen regularly when you have a large number of nodes. Additionally, Laurent provides examples of how to improve the architecture of clusters to increase scalability and performance, both on the control plane and the data plane (communication between pods and ingress traffic). If scale can be hard on the control plane, it's even harder on tools from the ecosystem, which have rarely been tested on very large clusters. He explains several examples of the tools Datadog uses and how it had to improve them to handle its scale. And you'll leave with practical advice on how to build a good relationship with the community and start contributing back.

Kubernetes the Very Hard Way. Lisa Portland 2019
Tue, 29 Oct 2019 22:23:34 GMT | /slideshow/kubernetes-the-very-hard-way-188349737/188349737

Running large Kubernetes clusters is challenging. At large scales, practitioners need to adapt and tune both their architectures and component configurations in specialized ways. Our organisation has been running large-scale Kubernetes clusters (up to 2000 nodes, and growing) for more than a year, and we have learned several lessons the hard way. This talk will dive into complex runtime and networking issues that occur when running Kubernetes in production at scale. We will provide examples of how to improve the architecture of clusters to increase scalability and performance, both on the control plane and the data plane. Further, tools from the greater ecosystem will be examined, as they are rarely tested within the context of very large clusters. Finally, the talk will also discuss the mutually beneficial relationship we built with the larger Kubernetes community by providing feedback on the tools and contributing both fixes and improvements upstream.

10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! (Container Day Paris)
Wed, 05 Jun 2019 09:26:54 GMT | /slideshow/10-ways-to-shoot-yourself-in-the-foot-with-kubernetes-9-will-surprise-you-container-day-paris/148911982

Kubernetes is a very powerful and complicated system, and many users don't understand the underlying systems. Come learn how your users can abuse container runtimes, overwhelm your control plane, and cause outages - it's actually quite easy! In the last year, we have containerized hundreds of applications and deployed them in large-scale clusters (more than 1000 nodes). The journey was eventful and we learned a lot along the way. We'll share stories of our ten favorite Kubernetes foot guns, including the dangers of cargo culting, rolling updates gone wrong, the pitfalls of initContainers, and nightmarish daemonset upgrades. The talk will present solutions we adopted to avoid or work around some of these problems, and will finally show several improvements we plan to deploy in the future. Similar to the Kubecon talk with the same title, with a few new incidents.

10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!
Tue, 28 May 2019 19:37:44 GMT | /slideshow/10-ways-to-shoot-yourself-in-the-foot-with-kubernetes-9-will-surprise-you-148015068/148015068

Kubernetes is a very powerful and complicated system, and many users don't understand the underlying systems. Come learn how your users can abuse container runtimes, overwhelm your control plane, and cause outages - it's actually quite easy! In the last year, we have containerized hundreds of applications and deployed them in large-scale clusters (more than 1000 nodes). The journey was eventful and we learned a lot along the way. We'll share stories of our ten favorite Kubernetes foot guns, including the dangers of cargo culting, rolling updates gone wrong, the pitfalls of initContainers, and nightmarish daemonset upgrades. The talk will present solutions we adopted to avoid or work around some of these problems, and will finally show several improvements we plan to deploy in the future.

Optimizing kubernetes networking
Tue, 11 Dec 2018 03:26:18 GMT | /slideshow/optimizing-kubernetes-networking/125570238

Running large Kubernetes clusters is challenging. This talk focuses on how you can optimize your network setup in clusters with 1000-2000 nodes. It discusses standard ingress solutions and their drawbacks, as well as potential alternatives.

Kubernetes at Datadog the very hard way
Sat, 03 Nov 2018 18:16:51 GMT | /lbernail/kubernetes-at-datadog-the-very-hard-way

This presentation describes the challenges we faced building, scaling, and operating a Kubernetes cluster of more than 1000 nodes to host the Datadog applications.

Deep Dive in Docker Overlay Networks
Wed, 25 Oct 2017 13:06:17 GMT | /lbernail/deep-dive-in-docker-overlay-networks-81193529

The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink, and a distributed key-value store. This talk will present each of these mechanisms one by one, along with their userland tools, and show hands-on how they interact together when setting up an overlay to connect containers. The talk will continue with a demo showing how to build your own simple overlay using these technologies.
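
For a taste of the Netlink plumbing the talk walks through, here is a sketch using the pyroute2 library to create a VXLAN interface similar to the one the overlay driver sets up. The interface names, VNI, and settings are illustrative assumptions, and it requires root:

```python
from pyroute2 import IPRoute

# Create a VXLAN interface of the kind the overlay driver configures in its
# namespace. Names, VNI and settings are examples; requires root privileges.
ip = IPRoute()
underlay = ip.link_lookup(ifname="eth0")[0]
ip.link(
    "add",
    ifname="vxlan42",
    kind="vxlan",
    vxlan_id=42,           # VNI carried in every encapsulated frame
    vxlan_link=underlay,   # underlay device carrying the VXLAN/UDP packets
    vxlan_port=4789,       # standard VXLAN UDP port
    vxlan_learning=False,  # Docker fills the FDB itself instead of flooding
)
idx = ip.link_lookup(ifname="vxlan42")[0]
ip.link("set", index=idx, state="up")
ip.close()
```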

Deeper dive in Docker Overlay Networks
Mon, 23 Oct 2017 08:06:06 GMT | /slideshow/deeper-dive-in-docker-overlay-networks-81091247/81091247

The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink, and a distributed key-value store. This talk will present each of these mechanisms one by one, along with their userland tools, and show hands-on how they interact together when setting up an overlay to connect containers. The talk will continue with a demo showing how to build your own simple overlay using these technologies. Finally, it will show how we can dynamically distribute IP and MAC information to every host in the overlay using BGP EVPN.
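
On the network-namespace side, a companion sketch (pyroute2 again, root required, names made up) wires a namespace the way a container is attached to an overlay: a veth pair with one end moved inside:

```python
from pyroute2 import IPRoute, netns

# Create a namespace and a veth pair, then move one end into the namespace,
# which is how a container gets its interface. Names are illustrative.
netns.create("demo")
ip = IPRoute()
ip.link("add", ifname="veth-host", peer="veth-ctn", kind="veth")
ctn = ip.link_lookup(ifname="veth-ctn")[0]
ip.link("set", index=ctn, net_ns_fd="demo")  # container end into the netns
host = ip.link_lookup(ifname="veth-host")[0]
ip.link("set", index=host, state="up")
ip.close()
```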

Discovering OpenBSD on AWS
Sat, 23 Sep 2017 16:04:21 GMT | /slideshow/discovering-openbsd-on-aws/80085683

The story behind the OpenBSD AMI, how it integrates with AWS, and a demo of a dynamic VPN built using Consul (all running OpenBSD).

Operational challenges behind Serverless architectures
Thu, 18 May 2017 15:50:17 GMT | /slideshow/operational-challenges-behind-serverless-architectures/76098234

Serverless architectures are promising and will play an important role in the coming years, but the ecosystem around serverless is still pretty young. We have been operating Lambda-based applications for about a year and have faced several challenges. In this presentation we share these challenges and propose some solutions to work around them.

Deep dive in Docker Overlay Networks
Wed, 19 Apr 2017 22:45:51 GMT | /slideshow/deep-dive-in-docker-overlay-networks/75197114

The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink, and a distributed key-value store. This talk will present each of these mechanisms one by one, along with their userland tools, and show hands-on how they interact together when setting up an overlay to connect containers. The talk will continue with a demo showing how to build your own simple overlay using these technologies.

Feedback on AWS re:invent 2016
Wed, 07 Dec 2016 10:49:06 GMT | /slideshow/feedback-on-aws-reinvent-2016/69908895

Overview of the new services announced at re:invent.

Early recognition of encrypted applications
Mon, 06 Apr 2015 07:20:11 GMT | /slideshow/slides-pam2007/46679567

Most tools that recognize the application associated with network connections use well-known signatures as the basis for their classification. This approach is very effective in enterprise and campus networks to pinpoint forbidden applications (peer-to-peer, for instance) or security threats. However, it is easy to use encryption to evade these mechanisms. In particular, Secure Sockets Layer (SSL) libraries such as OpenSSL are widely available and can easily be used to encrypt any type of traffic. In this paper, we propose a method to detect applications in SSL-encrypted connections. Our method uses only the size of the first few packets of an SSL connection to recognize the application, which enables early classification. We test our method on packet traces collected on two campus networks and on manually encrypted traces. Our results show that we are able to recognize the application in an SSL connection with more than 85% accuracy.
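
The idea can be sketched quickly: represent each SSL connection by the sizes of its first packets, signed by direction, and fit a mixture model over labeled traces. A toy version with synthetic data and scikit-learn (the actual paper uses real labeled traces and more careful cluster labeling):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-in for the paper's features: sizes of the first 4 packets of a
# connection, with the sign encoding direction (client->server positive).
# Real data comes from labeled traces; these draws are purely illustrative.
def synth(center, n=200):
    return rng.normal(center, 30.0, size=(n, 4))

https_like = synth([200, -1400, -1400, 300])
pop3s_like = synth([100, -80, 60, -900])
X = np.vstack([https_like, pop3s_like])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Clusters are then labeled with applications using the training traces, and
# a new connection is classified from its first packets only:
new_conn = np.array([[210.0, -1380.0, -1350.0, 310.0]])
print("cluster of new connection:", gmm.predict(new_conn)[0])
```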

Early application identification. CONEXT 2006
Mon, 06 Apr 2015 07:09:37 GMT | /slideshow/early-application-identification-conext-2006/46679315

The automatic detection of applications associated with network traffic is an essential step for network security and traffic engineering. Unfortunately, simple port-based classification methods are not always efficient, and systematic analysis of packet payloads is too slow. Most recent research proposals use flow statistics to classify traffic flows once they are finished, which limits their applicability for online classification. In this paper, we evaluate the feasibility of application identification at the beginning of a TCP connection. Based on an analysis of packet traces collected on eight different networks, we find that it is possible to distinguish the behavior of an application from the size and the direction of the first few packets of the TCP connection. We apply three techniques to cluster TCP connections: K-Means, Gaussian Mixture Models, and spectral clustering. The resulting clusters are used together with assignment and labeling heuristics to design classifiers, which we evaluate on different packet traces. Our results show that the first four packets of a TCP connection are sufficient to classify known applications with an accuracy over 90% and to identify new applications as unknown with a probability of 60%.
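
A minimal illustration of the clustering-plus-threshold idea from the abstract, with synthetic stand-ins for the labeled traces (K-Means from scikit-learn; the distance threshold for flagging "unknown" traffic is an arbitrary choice here):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Each row: sizes of the first 4 packets of a TCP connection, signed by
# direction. Synthetic stand-ins for the paper's labeled traces.
X = np.vstack([
    rng.normal([150, -1200, -1200, 200], 25, size=(300, 4)),  # "http"-like
    rng.normal([30, -50, 40, -60], 10, size=(300, 4)),        # "chat"-like
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def classify(conn, threshold=200.0):
    """Assign a cluster, or 'unknown' if too far from every centroid."""
    d = np.linalg.norm(km.cluster_centers_ - conn, axis=1)
    return int(d.argmin()) if d.min() < threshold else "unknown"

print(classify(np.array([160, -1150, -1250, 190])))  # lands in a cluster
print(classify(np.array([5000, 5000, 5000, 5000])))  # -> 'unknown'
```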