In Jan 2012, Zynga was kind enough to invite me to speak at their SF office. These are the slides I presented; it's much the same SPDY content, though starting to focus more on mobile.
University of Delaware - Improving Web Protocols (early SPDY talk) - Mike Belshe
SPDY is a protocol developed by Google to improve web performance by addressing latency and security issues on the web. It allows for multiplexing of requests over a single TCP connection to reduce round trips and connection overhead. Initial testing showed SPDY reduced page load times by 40% on average compared to HTTP. Google has deployed SPDY internally and it is enabled by default in Chrome, but more work is still needed for full standardization and deployment.
The web has dramatically evolved over the last 20+ years, yet HTTP - the workhorse of the Web - has not. Web developers have worked around HTTP's limitations, but:
--> Performance still falls short of full bandwidth utilization
--> Web design and maintenance are more complex
--> Resource consumption increases for client and server
--> Cacheability of resources suffers
HTTP/2 attempts to solve many of the shortcomings and inflexibilities of HTTP/1.1.
2. Why am I here?
SPDY started over 3 years ago
Reduced latency is now proven
It's better for the network
Let's focus on interoperability
3. Who is using SPDY?
Google Chrome & all Google web properties
Mozilla Firefox
Twitter
Amazon Silk
Others: Cotendo, Strangeloop, iPhone client, Apache mod-spdy, nginx beta, jetty, netty, libraries in python, node.js, erlang, ruby, go, and C
4. How did SPDY come to be?
We wanted reduced web page latency for users
5. What SPDY is Not
A transport layer protocol (like TCP)
Rocket Science
Cheap Compression Tricks
6. What SPDY is
An amalgam of well-known ideas based on performance data:
multiplexing
prioritization
compression
server push
transparent to HTTP app servers
deployable today
7. Real deployment has also shown
Better for the network
Better for Mobile
HTTP is not just for HTML
Battery life matters
8. Background: What is a WebPage?
86 resources
13 hosts
800+KB
only 66% compressed (top sites are ~90% compressed)
14. 1. Multiplexing
Small, fixed length frames
Fully interleaved streams
Streams can be created by either endpoint with zero round trips.
Many implementors have remarked it's easy to implement!
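The fixed-length framing idea is simple enough to sketch in a few lines. The layout below (31-bit stream ID, 8-bit flags, 24-bit length) loosely follows the SPDY draft's data frame, but it is an illustration, not the exact wire format:

```python
import struct

def pack_frame(stream_id, data, flags=0):
    # 8-byte header: high bit 0 marks a data frame; length fits in 24 bits.
    header = struct.pack(">II", stream_id & 0x7FFFFFFF,
                         (flags << 24) | (len(data) & 0xFFFFFF))
    return header + data

def unpack_frames(buf):
    # Walk the byte stream frame by frame; each frame names its stream.
    frames = []
    while buf:
        stream_id, word = struct.unpack(">II", buf[:8])
        length = word & 0xFFFFFF
        frames.append((stream_id, buf[8:8 + length]))
        buf = buf[8 + length:]
    return frames

# Two streams fully interleaved on one connection: the demultiplexer
# reassembles each stream without blocking on the other.
wire = (pack_frame(1, b"GET /a") + pack_frame(3, b"GET /b") +
        pack_frame(1, b" HTTP/1.1"))
print(unpack_frames(wire))
```

Because every frame carries its stream ID, either endpoint can open a new stream just by sending a frame with a fresh ID — no round trip needed.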
17. 2. Prioritization
Not all requests are equal!
Failure to prioritize is actually slower
Must consider two metrics:
Time to first render
Overall Page Load Time
SPDY allows client-specified priorities, with best-effort delivery by the server
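One way to picture "best effort to deliver": a hypothetical server-side scheduler that drains queued responses in client-assigned priority order (0 = highest, as in SPDY) rather than FIFO, so the HTML that drives first render is never stuck behind images:

```python
import heapq

# Hypothetical response queue keyed by (priority, stream_id).
pending = []
for priority, stream_id, resource in [(3, 5, "image.png"),
                                      (0, 1, "index.html"),
                                      (1, 3, "style.css")]:
    heapq.heappush(pending, (priority, stream_id, resource))

# Best-effort delivery: highest-priority resource first.
order = [heapq.heappop(pending)[2] for _ in range(len(pending))]
print(order)  # ['index.html', 'style.css', 'image.png']
```

This is why failure to prioritize is actually slower: FIFO delivery would have sent image.png first and delayed time to first render.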
18. 3. Header Compression
SPDY uses stateful compression across requests
Using zlib, achieves 85-90% compression
Don't care if compressor is zlib; only care about session state.
Must be mandatory
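The stateful part can be demonstrated with Python's zlib: one compressor object lives for the whole session, so headers repeated on later requests compress down to back-references into the window. (The header serialization and Z_SYNC_FLUSH usage here are illustrative; real SPDY also seeds zlib with a shared dictionary.)

```python
import zlib

# One compressor per connection: its window carries state across requests.
session_compressor = zlib.compressobj()

def compress_headers(headers):
    raw = "".join(f"{k}: {v}\r\n" for k, v in headers).encode()
    # Z_SYNC_FLUSH emits a complete block without resetting the state.
    return (session_compressor.compress(raw) +
            session_compressor.flush(zlib.Z_SYNC_FLUSH))

headers = [("host", "www.example.com"),
           ("user-agent", "Mozilla/5.0 (X11; Linux x86_64) Chrome/16.0"),
           ("accept-encoding", "gzip, deflate")]

first = compress_headers(headers)   # cold: roughly the raw size
second = compress_headers(headers)  # warm: mostly back-references
print(len(first), len(second))
```

The second request's headers shrink dramatically precisely because the compressor is stateful — which is also why the feature must be mandatory: both sides have to keep the session state in sync.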
29. Other results
Firefox confirmed Chrome results
Google recently reported that SPDY over SSL is now faster than HTTP without SSL
BoostEdge paper confirms Google's numbers
We need vendors to publish more!
30. Deployment
A Process of Elimination
Transport choices: TCP or UDP
Chose TCP
Port choices: 80 or 443
But both are taken!
Chrome test shows usability of port 80 for non-HTTP protocols is <75%.
Using port 80 makes SPDY like Pipelining.
Port 443 is the only untampered port.
Other ports: blocked by firewalls
31. Pause - That was the Big Picture
"Better is the enemy of good"
The aforementioned items are the non-controversial parts of SPDY.
HTTP/2.0 should take those concepts.
Minutiae don't matter:
exact framing syntax
exact compression algorithm
Stay Focused on the Big Picture!
33. Why not SCTP?
Multiplexing over a single TCP stream does have one element of head-of-line blocking.
But SCTP has problems:
Not available on most platforms
Requires administrative privs to install (so it can't be bundled easily with browser installs)
Incompatible with NAT on today's internet.
34. Why not Pipelining?
Pipelining was introduced a decade ago.
Wasn't deployable due to intermediaries that didn't handle it properly.
It has complex head-of-line blocking problems (hanging GETs).
The Firefox team's list of pipelining heuristics is huge; SPDY was easier to build than pipelining.
Counterpoint: mobile uses pipelining. Does it work?
35. Why 1 Connection?
More efficient for network, memory use, and server scalability; better compression.
Don't have to wait for a handshake to complete before sending a request.
Doesn't encourage bufferbloat (Jim Gettys).
Lets the transport do what it does best.
Would like to see more research here.
37. Mobile is Different
New client-side problems
Battery life constraints
Small CPUs (changing fast!)
New Network Properties
Latency from 150-300 ms per round trip
Bandwidth 1-4Mbps
New use cases
Mobile Web Browsers are 1st generation
So web browsing sucks
Everyone uses Apps w/ REST APIs anyway
38. SPDY and Mobile
Fewer connections/bytes/packets reduce the radio's transmit requirements
Mobile connection management is different due to NAT and in-and-out networks.
Can't use TCP keepalives
PING frame detects closed conns quickly
Header compression minimizes upstream sends
1 conn per domain minimizes TCP-level overhead
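The PING mechanism above can be sketched assuming the SPDY control-frame layout (control bit plus version, 16-bit type, 8-bit flags, 24-bit length): type 6 is PING, and the 4-byte payload is an opaque ID the peer echoes back. Version 2 and the odd-ID convention are assumptions for illustration.

```python
import struct

PING_TYPE = 6  # PING control frame type in the SPDY drafts

def ping_frame(ping_id, version=2):
    # Control frame header: top bit set, then version; flags=0, length=4.
    header = struct.pack(">HHI", 0x8000 | version, PING_TYPE, 4)
    return header + struct.pack(">I", ping_id)

def is_ping_reply(frame, ping_id):
    # A live connection echoes the PING unchanged; silence past a
    # deadline means the NAT binding or radio link is gone, so the
    # client can retire the connection without waiting on TCP timeouts.
    return frame == ping_frame(ping_id)

probe = ping_frame(1)  # a client probe with ID 1
print(probe.hex())
```

Unlike TCP keepalives, which mobile stacks can't rely on, this probe costs one small frame each way and gives an answer within a single round trip.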
40. Don't make things "optional"
Optional features are disabled features.
e.g. pipelining.
Optional features are buggy.
e.g. absolute URIs fail on many HTTP/1.1 servers.
Feature detection often takes a round-trip.
e.g. does it support a compressed request?
Proxies will tamper with option negotiation.
e.g. Accept-EnXcoding
41. Security
I often hear that security is difficult/expensive/costly or unwanted.
I've NEVER heard this complaint from a user.
I've ONLY heard this complaint from proxy and server implementors.
Could it be that users just expect it to be secure?
42. What Security Can HTTP/2.0 Provide?
Security is accomplished across the stack, not at a single layer. But HTTP does play a role.
Requiring SSL with HTTP/2.0 will:
Protect the user from eavesdroppers (firesheep!)
Protect from content tampering
Protect the protocol for future extensions
Authenticate servers
43. Insecure Protocols Hurt Users
Without integrity & privacy, you enable anyone to:
record data about you
inject advertisements into your content
prevent access to certain sites
alter site content
limit your bandwidth (for any reason)
Is this what the user wants?
44. Insecure Protocols Enable Transparent Proxies
Transparent proxies are proxies that you didn't opt in to
As a site operator, they can alter your content
As a user, they can alter your web experience
Transparent proxies are to blame for many of our protocol woes:
Inability to fix HTTP/1.1 pipelining
Turning off compression behind the user's back
They are easy to deploy, however...
45. SSL is not Expensive
Twitter and Google rolled out with zero additional hardware.
Bulk encryption (RC4) is basically free
Handshakes are a little expensive, but <1% of CPU costs
Certificates are free.
SPDY + SSL is faster than HTTP.
46. Is an insecure protocol legal anymore?
Privacy laws in the US & EU make those that leak private information liable for the losses
Should web site administrators need to know how HTTP works in order to obey basic laws?
48. We have distinct use cases
End User HTTP
targets consumers and Internet User needs
BackOffice HTTP
for those using HTTP behind their own firewalls
Caching HTTP (also corp firewall HTTP)
For corporate environments or organizations sharing a common cache
May not be a separate protocol, but let's make it work explicitly.
49. End User HTTP
Optimized for the Internet Consumer.
Features:
Always secure (safe to use in the Cafe)
Always compressed
Always fast
50. BackOffice HTTP
Used for backoffice server infrastructure, already behind your own firewalls.
Features:
Not implemented by browsers
Makes SSL optional
Makes Compression optional
51. Caching HTTP
Used by corporations with filtering firewalls or those that want to have an external cache
Features:
User opts-in. Never transparent.
SSL to the proxy; proxy brokers the request to origin
Respects HSTS
Reduces need for SSL MITM