
Google aims to speed up the entire Internet with SPDY


There are many ways to speed up a user's Internet experience: optimize web pages, purchase a faster connection, or simply update software. Google, however, has apparently bypassed all of this. They aim to replace the HTTP protocol itself with SPDY.

As the Google blog post reminds us, "HTTP is an elegantly simple protocol that emerged as a web standard in 1996 after a series of experiments." It has served well so far, but initial tests of SPDY (pronounced "speedy") show that it can definitely be improved. Google set up a simulation of a typical household's Internet connection, built a SPDY-compatible version of Google Chrome, and gave it a whirl. When loading the top 25 websites on the Internet, pages loaded up to 55% faster than they did over plain HTTP.

So, why exactly did Google do this? As you've hopefully gathered by now, it's all about speed. A whitepaper on SPDY lists a few aspects of HTTP that are rather limiting:

  • Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests. Browsers work around this problem by using multiple connections. Since 2008, most browsers have finally moved from 2 connections per domain to 6.
  • Exclusively client-initiated requests. In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client.
  • Uncompressed request and response headers. Request headers today vary in size from ~200 bytes to over 2KB. As applications use more cookies and user agents expand features, typical header sizes of 700-800 bytes are common. For modems or ADSL connections, in which the uplink bandwidth is fairly low, this latency can be significant. Reducing the data in headers could directly improve the serialization latency to send requests.
  • Redundant headers. In addition, several headers are repeatedly sent across requests on the same channel. However, headers such as the User-Agent, Host, and Accept* are generally static and do not need to be resent.
  • Optional data compression. HTTP uses optional compression encodings for data. Content should always be sent in a compressed format.
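To get a feel for the header-size point above, here is a minimal sketch in Python. It compresses a representative HTTP request header block (the field values are illustrative, not taken from the whitepaper) with zlib, the same general-purpose compressor SPDY proposes for headers, and prints the before and after sizes:

```python
import zlib

# A representative HTTP/1.1 request header block of a few hundred
# bytes; the exact fields and values are hypothetical.
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1) Gecko/20100101 Firefox/5.0\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Language: en-us,en;q=0.5\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Cookie: session=abc123; prefs=dark; tracking=xyz789\r\n"
    "\r\n"
).encode("ascii")

# Compress the header block and compare sizes.
compressed = zlib.compress(headers)
print("raw bytes:", len(headers))
print("compressed bytes:", len(compressed))
```

Even on a single request the compressed block is noticeably smaller, and because headers like User-Agent and Accept* repeat on every request over the same channel, a compressor that keeps its dictionary across requests (as SPDY's does) shrinks later headers even further.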

If you'd like more in-depth details about the protocol, check out the whitepaper in addition to the blog post linked earlier. Google may have a lot of projects going on right now, but this one seems more exciting than most.
