webdev

Last update: June 18, 2020

I started building browser extensions more than 22 years ago, and I started building browsers directly just over 16 years ago. At this point, I think it’s fair to say that I’m entering the grizzled veteran phase of my career.

With the Edge team continuing to grow with bright young minds from college and industry, I’m increasingly often asked “Where do I learn about browsers?” and I haven’t had a ready answer for that question.

This post aims to answer it.

First, a few prerequisites for developing expertise in browsers:

  1. Curiosity. While browsers are more complicated than ever, there are also better resources than ever to learn how they work. All major browsers are now based on open-source code, and if you’re curious, you no longer need to join a secret priesthood to discover how they operate under the hood.
  2. Willingness to Experiment. Considering how complex browsers are (and because they’re so diverse across platforms, makers, and versions), it’s often easiest to definitively answer questions about how browsers work by trying things, rather than reading an explainer (possibly outdated, or a map that doesn’t match the terrain) or reading the code (often complex and potentially misleading). Build test cases and try them in each browser to see what happens. When you encounter surprising behavior, let your curiosity guide you into figuring it out. Browsers contain no magic, but plenty of butterfly effects.
  3. Doggedness. I’ve been doing this for half of my life, and I’m still learning daily. While historical knowledge will serve you well, things are changing in this space every day, and keeping up is an endless challenge. And it’s often fun.

Now, how do you apply these prerequisites and grow to become a master of browsers? Read on.

Fundamental Understanding

Over the years, a variety of broad resources have been developed that will give you a good foundation in the fundamentals of how browsers work. Taking advantage of these will help you more effectively explore and learn on your own.

  • First, I recommend reading the Chrome Comic Book. This short, 38-page comic book from comics legend Scott McCloud was published alongside the first version of Google Chrome back in 2008. It clearly and simply explains many of the core concepts behind modern browsers as application platforms.
  • HTML5Rocks has a great introduction to How Browsers Work. This is a lengthy and detailed introduction to how browsers turn HTML and CSS into what you see on the screen. Read this article and you’ll understand more about this topic than 90% of web developers.
  • The folks at Google have created a fantastic four-part illustrated series about how modern browsers work, Inside look at modern web browsers, covering navigation, the rendering engine, and input and compositing, as part of their Web Fundamentals site.
  • Mozilla wrote a fantastic cartoon introduction to WebAssembly, explaining the basics behind this new technology; there’s tons of other invaluable content on Mozilla Hacks.
  • The Chromium Chronicle is a monthly series geared specifically to the Chromium developers who build the browser.
  • Web Developers should check out Web.Dev, a great source of articles on building fast and secure websites.

Books

If you prefer to learn from books, I can only recommend a few. Sadly, there are few on browsers themselves (largely because they tend to evolve too quickly), but there are good books on web technologies.

Tools

One of the best ways to examine what’s going on with browsers is to use tools to watch them at work as you browse your favorite websites.

  • Built-in DevTools (just hit F12!) – They’re amazingly powerful. I don’t know of a great tutorial, but there are likely some on YouTube.
  • Telerik Fiddler – See what requests hit the network and what they contain.
  • The VisBug Chrome Extension – Easily manipulate any page layout, directly in your browser.

Use the Source, Leia

The fact that all of the major browsers are built atop open-source projects is a wonderful thing. No longer do you need to be a reverse-engineering ninja with a low-level debugger to figure out how things are meant to work (although sometimes such approaches can still be super-valuable).

Source code locations:

Navigating the Code

While simply perusing a browser’s source code might give you a good feel for the project, browsers tend to be enormous. Chromium is over 10 million lines of code, for example.

If you need to find something in particular, an effective approach is often to search for a string shown in the browser UI near the feature of interest. (Or, if you’re searching for a DOM function name or HTML attribute name, try searching for that.) We might call this method string chasing.

By way of example, today I encountered an unexpected behavior in the handling of the “Go to <url>” command on Chromium’s context menu:

So, to find the code that implements this feature, I first try searching for that string:

…but there are a gazillion hits, which makes it hard to find what I need. So I instead search for a string that’s elsewhere in the context menu, and find only one hit in the Chromium “grd” (resources) file:

When I go look at that grd file, I quickly find the identifier I’m really looking for just below my search result:

So, we now know that we’re looking for usages of IDS_CONTENT_CONTEXT_GOTOURL, probably in a .CC file, and we find that almost immediately:

From here, we see that the menu item has the command identifier IDC_CONTENT_CONTEXT_GOTOURL, which we can then continue to chase down through the source until we find the code that handles the command. That command makes use of a variable selection_navigation_url_, which is filled elsewhere by some pretty complicated logic.
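If you prefer to grep a local checkout rather than use Chromium’s online code search, the same two-step hunt looks roughly like the toy Python sketch below. The checkout path and search strings are illustrative, and constraining the first pass to the .grd/.grdp resource files is what keeps the hit count manageable.

import os

def find_in_tree(root, needle, extensions):
    # Walk the tree and print file:line for every match of `needle`.
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if needle in line:
                            print(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                pass

CHECKOUT = r"C:\src\chromium\src"  # hypothetical location of your checkout

# Step 1: find the visible UI string in the resource files to get its IDS_ identifier.
find_in_tree(CHECKOUT, "Go to ", (".grd", ".grdp"))

# Step 2: chase that identifier (and then its IDC_ command) into the C++ code.
find_in_tree(CHECKOUT, "IDS_CONTENT_CONTEXT_GOTOURL", (".cc", ".h"))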

After you gain experience in the Chromium code, you might learn “Oh, yeah, all of the context menu stuff is easy to find, it’s in the renderer_context_menu directory” and limit your searches to that area, but after four years of working on Chrome, I still usually start my searches broadly.

Optional: Compile the code

If you’d actually like to compile the code of a major browser, things are a bit more involved, but if you follow the configuration instructions to the letter, your first build will succeed. Back in 2015, Monica Dinculescu created an amazing illustrated guide to contributing to Chromium.

You can compile Chromium or Firefox on a mid-range machine from 2016, but it will take quite a long time. A beefy PC will speed things up a bunch, but until we have cloud compilers available to the public, it’s always going to be pretty slow.

Look at their Bugs

All browsers except Microsoft Edge have a public bug tracker where you can search for known issues and file new bugs if you encounter them.

  • Firefox – Firefox Bugzilla
  • WebKit – WebKit Bugzilla
  • Chromium – CRBug
  • Microsoft Edge – Platform bugs that are inherited from Chromium are tracked using CRBug. Sadly, at present there is no public tracker for bugs that reproduce only in Edge. Bugs reported by the “Feedback” button are tracked internally by Microsoft.
  • Brave – on GitHub
  • HTML5 Specification – on GitHub

Here are instructions for filing a great browser bug.

Blogs to Read

  • This One – I write mostly about browsers.
  • My (archived) IEInternals – I started writing this blog because it was the only reliable way for me to find my notes from investigations and troubleshooting years later.
  • Cloudflare’s – Cloudflare is a $5B company whose primary product is their amazing blog. I understand they also run a CDN on the side to generate interesting topics for their blog to talk about.
  • Nasko Oskov’s – Nasko is an engineer on the Chrome Security team and writes mostly about security topics.
  • Chris Palmer’s – Chris is an engineer on the Chrome Security team and writes about secure design.
  • Adam Langley’s – Google’s expert cryptographer
  • Bruce Dawson’s – Bruce is a Chrome Engineer who posts lots of interesting information about debugging and performance troubleshooting, especially on Windows.
  • Anne van Kesteren’s – Anne works on the HTML5 spec.
  • Mark Nottingham’s – Mark co-chairs the HTTP and QUIC working groups.

Specific Posts of Interest

People to Follow

I’ve doubtless forgotten some; see who I follow.

The Business of Browsers

Public data reveals that each point of market share in the browser market is worth at least $100,000,000 USD annually (most directly in the form of payments from the browser’s configured search engine).

Remembering this fact will help you understand many other things, from how browsers pay their large teams of expensive software engineers, to how they manage to give browsers away for free, to why certain features behave the way that they do.

Extra Resources

Browsers are hugely complicated beasts, and tons of fun. If the resources above leave you feeling both overwhelmed and excited, maybe you should become a browser builder.

Want to change the world? Come join the new Microsoft Edge team today!

-Eric

The Chrome team is embarking on a clever and bold plan to change the recipe for cookies. It’s one of the most consequential changes to the web platform in almost a decade, but with any luck, users won’t notice anything has changed.

But if you’re a web developer, you should start testing your sites and services now to help ensure a smooth transition.

What’s this all about?

As originally designed, cookies were very simple. When a browser made a request to a website, that website could return a tiny piece of text, called a cookie, to the browser. When the web browser subsequently requested any resource from that website, the cookie string would be echoed back to the server that first sent it.
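If you’d like to watch that round trip yourself, here’s a minimal sketch using nothing but Python’s standard library; the cookie name and port are arbitrary. Run it, load http://localhost:8000/ twice, and the second request will echo back the cookie the first response set.

from http.server import BaseHTTPRequestHandler, HTTPServer

class CookieEcho(BaseHTTPRequestHandler):
    def do_GET(self):
        received = self.headers.get("Cookie")  # whatever the browser echoed back
        self.send_response(200)
        if received is None:
            # First visit: hand the browser a tiny piece of text to remember.
            self.send_header("Set-Cookie", "visited=1")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"Cookie header received: {received}\n".encode())

HTTPServer(("localhost", 8000), CookieEcho).serve_forever()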

Simple, right?

A bit too simple, as it turns out.

Browser designers have spent the last two decades trying to clean up the mess that this one simple feature causes, and proposed alternatives have struggled to gain adoption.

There are two major classes of problem with the design of cookies: Privacy, and Security.

Privacy

The top privacy problem is that cookies are sent every time a request is made for a resource, even if that request is made from a completely different context. So, if you visit A.example.com, that page might request a tracking pixel from ad.doubleclick.net. This tracking pixel might set a cookie. The tracking pixel’s cookie is called a third party cookie because it was set by a domain unrelated to the page itself.

If you later visit B.textslashplain.com, which also contains a tracking pixel from ad.doubleclick.net, the tracking pixel’s cookie set on your visit to A.example.com is sent to ad.doubleclick.net, and now that tracker knows that you’ve visited both sites. As you browse more and more sites that contain a tracking pixel from the same provider, that provider can build up a very complete profile of the sites you like to visit, and use that information to target ads to you, sell the data to a data aggregation company, etc.

Today, Brave blocks 3rd party cookies by default, while Safari’s ITP feature does something more intricate. Firefox and the new Edge have “Tracking Prevention” features that block 3rd-party cookies from known trackers.

Most browsers offer a setting to turn off ALL third party cookies, and older versions of Internet Explorer used P3P to block cookies that did not promise to abide by reasonable privacy protections. However, almost all users leave 3rd party cookies enabled, and enough sites sent fraudulent P3P declarations that P3P support was ripped out of the only browser that supported it.

Security

The security problems with cookies are a bit more subtle.

In most cases, after you log in, a site stores an authentication token for your identity in a cookie, so that you don’t have to reenter your password on every page or retap your security key every time you do anything. Each request you make to the site carries that cookie, and with it your token.

The problem is that this creates the possibility of a cross-site request forgery (CSRF) attack, in which an attacker carefully crafts a request to a website to which you are logged in. When you visit the attacker’s site (say, to read a news article or view an image link posted to your social media feed), the attacker’s page instructs your browser to send its malicious request (“Transfer $1000 from me to @badguy”) to the victim site where you are logged in (e.g. https://bank.example).

Normally, such a request would be ignored or met with a demand for credentials, but because your browser is already logged in to bank.example and because your browser made the request, the server receives the cookie containing your authentication token and deems the request legitimate. You’ve been robbed! This class of attack is known as a confused deputy attack.

Now, there are myriad ways to protect against this problem, but they all require careful work on the part of web application developers, and a long history of exploits shows that failing to protect against CSRF is a common mistake.
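For the curious, one of the more common defenses is the “double-submit” token: the server hands the browser a random secret both as a cookie and as a hidden form field, and rejects any state-changing request where the two don’t match, since a cross-site attacker can read neither copy. A sketch, assuming hypothetical request/response objects standing in for whatever your framework provides:

import hmac
import secrets

def issue_csrf_token(response):
    token = secrets.token_urlsafe(32)
    # set_cookie() is a hypothetical helper; use your framework's equivalent.
    response.set_cookie("csrf_token", token, secure=True, samesite="Strict")
    return token  # embed this same value in a hidden form field on the page

def is_request_legitimate(request):
    cookie_token = request.cookies.get("csrf_token", "")
    form_token = request.form.get("csrf_token", "")
    # Constant-time comparison; reject if either copy is missing or they disagree.
    return bool(cookie_token) and hmac.compare_digest(cookie_token, form_token)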

Other Privacy and Security Problems

I’ve described the biggest and most prominent of the security and privacy problems with the design of cookies, but those are just the tip of the iceberg. There are many other problems, including:

  • Because cookies are sent on plaintext HTTP requests, a passive network observer can watch your cookies as they’re sent over the network and correlate requests back to a single client. The NSA famously abused this vulnerability.
  • Cookies are sent to servers which respond with data that might allow Cross Site Leaks or violations of Same Origin Policy. For instance, an attacker might include in their page several guesses as to your identity. If they guess right (e.g. your cookie matches the URL identity), the image loads and now the attacker site knows exactly who you are.
  • Cookies can be trivially stolen in a XSS Attack if the HTTPOnly attribute was not set.
  • Cookies can be trivially leaked if the server forgets to set the Secure attribute and the site isn’t on the HSTS preload list.
  • Same Origin Policy blocks reading of cross-origin resources, but this depends on the integrity of the browser sandbox. Attacks like Spectre weaken this guarantee. Ambient authentication (like cross-origin cookies) weakens Cross-Origin Read Blocking’s ability to prevent a compromised renderer from stealing data.

Cookie Design Improvements

Over the years, browsers have introduced more and more features and toggles to help lock down cookies, from the Secure and HTTPOnly attributes to Cookie Prefixes (née Magic Named Cookies).

Perhaps the most promising improvement is a feature called SameSite cookies, by which cookies can opt in to being sent only in a first-party context:

Set-Cookie: __HostAuth=F123ABCA; SameSite=Strict; secure; httponly;

This helps protect your site against CSRF attacks and helps mitigate leakage of the user’s identity in a cross-site context.
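If you emit headers from Python, the standard library’s http.cookies module has supported the SameSite attribute since Python 3.8; a minimal sketch that produces a header equivalent to the one above:

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["__HostAuth"] = "F123ABCA"
cookie["__HostAuth"]["samesite"] = "Strict"   # requires Python 3.8 or later
cookie["__HostAuth"]["secure"] = True
cookie["__HostAuth"]["httponly"] = True

# Prints something like: Set-Cookie: __HostAuth=F123ABCA; HttpOnly; SameSite=Strict; Secure
print(cookie.output())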

There’s a nice SameSite cookie explainer (with pictures!).

While broadly supported by browsers, the SameSite directive isn’t getting used everywhere it should be.

Big Changes are Coming!

So the Chrome folks plan to change that.

In Chrome 80 and later, cookies will default to SameSite=Lax. This means that cookies will automatically be sent only in a first-party context unless they explicitly opt out by setting SameSite=None:

Set-Cookie: ACookieAvailableCrossSite; SameSite=None; secure; httponly


This change is small in size but huge in scope: it has major implications for any site that expects its cookies to be used in a cross-origin context.

What’s the Immediate Good?

In one fell swoop, many websites will get more secure. Sites that were previously vulnerable to CSRF and cross-site leak attacks will be protected from attack in the most popular browser.

Privacy improves, because setting and sending of 3rd-party cookies are blocked by default:

[Screenshot: a subframe’s third-party cookies being blocked]

Who’s on board?

We plan to match this default change in the new Edge browser (with experiments starting in v82). However, there’s no plan to match it in Internet Explorer (please stop using it!) or the old Edge (v18 and earlier).

Per Chromium’s Intent-to-Implement announcement, Firefox is looking into matching the change, although they’ve joking-not-joking suggested that they’re going to let Chrome lead the charge (and bear the brunt of the compatibility impact) before turning the feature on by default.

Per the I2I, Safari has not yet weighed in on this change. Safari’s ITP feature already imposes many interesting restrictions on cross-site cookies.

What Can Go Wrong?

If users visit a site that expects its cookies to be available but the cookies are missing, they might get a confusing error message suggesting that they toggle a setting that won’t help:

[Screenshot: a confusing sign-in error message]

Or, the site might just redirect between its identity provider and itself forever.

These sorts of problems happen on sites that use Federated Identity providers that depend on accessing cookies from 3rd-party subframes:

[Diagram: a federated identity login flow that depends on cookies in a 3rd-party subframe]

To fix this, the identity provider site will either need to set SameSite=None on its cookies, or will need to use a browser storage feature (e.g. localStorage) that is not impacted by this change. Please note that other browser features do impact the availability of DOM Storage, so it’s not a silver bullet.

Rollout Plan

The Chrome team has set an ambitious timeline that calls for turning this feature on by default in Chrome 80, slated for stable release on February 4th, 2020. Chrome’s rollout plan includes enabling the new default on an experimental basis in Chrome 79 pre-release [Beta/Dev/Stable] channels for machines that are not externally managed, in an attempt to defer walking into the compatibility minefield of enterprise intranets. Presently, this exclusion does not apply to machines that are domain-joined via AAD due to a limitation in Chromium.

The Chrome team also announced two enterprise policies for Chrome 80 that will allow admins to opt out of the new default entirely, or to opt out only specific sites. Edge expects to offer these same policies.

Developer Tooling

Developers who wish to enable the SameSite-by-Default feature locally for testing purposes can do so by visiting chrome://flags and searching for SameSite:

[Screenshot: the SameSite entries on the chrome://flags page]

Set the SameSite by default cookies feature to Enabled and restart the browser.

You can view the cookies used by the current page using the Application tab of the Developer Tools; the column at the far right shows the declared SameSite attribute:

[Screenshot: the Application tab’s cookie list, with the SameSite column at the far right]

The Chrome team has enabled logging in the Developer Tools to notify web developers that cookie behavior is changing. Visit chrome://flags/#cookie-deprecation-messages to ensure that the warnings are enabled:

[Screenshot: the cookie deprecation messages entry on chrome://flags]

If you then explore my test page, you can see the notices from the tools:

[Screenshot: SameSite cookie warnings displayed by the Developer Tools on the test page]

The Cookies subtab of the Network tab picked up a new checkbox “show filtered out request cookies” which allows you to see (in yellow) which cookies were not sent for the selected request due to SameSite rules:

[Screenshot: the Network tab’s Cookies subtab, with filtered-out request cookies shown in yellow]

Cookies restricted by SameSite rules are also logged in NetLog captures (issue 1005217).

Problems and Accommodations

While the vast majority of cookie scenarios will continue to work as expected, compatibility breaks are inevitable. Unfortunately, some of these breaks might not be trivially fixed by adding the SameSite=None attribute.

For instance, older versions of Safari treated SameSite=None as SameSite=Strict, which means that servers must avoid sending the None token to Safari 12.

The .NET Framework’s cookie writer used to simply omit the SameSite attribute when the SameSiteMode was “None.” Changing this will require the affected sites to update their framework to a version with the patch.
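A common workaround is to vary the header by client: send the SameSite=None token only to user agents known to handle it. A heavily simplified sketch follows; the two checks are illustrative, not the complete incompatible-client list (consult Chromium’s published list of known-broken user agents for that):

def same_site_none_is_safe(user_agent: str) -> bool:
    # Illustrative only: WebKit on iOS 12 and Safari on macOS 10.14 treat
    # SameSite=None as SameSite=Strict, so they must not receive the token.
    if "iPhone OS 12_" in user_agent or "iPad; CPU OS 12_" in user_agent:
        return False
    if ("Macintosh; Intel Mac OS X 10_14" in user_agent
            and "Safari" in user_agent and "Chrome" not in user_agent):
        return False
    return True

def cross_site_cookie_header(user_agent: str) -> str:
    base = "ACookieAvailableCrossSite=1; Secure; HttpOnly"
    return base + ("; SameSite=None" if same_site_none_is_safe(user_agent) else "")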

Early in our investigations, we found another problem related to how SameSite impacts cookies sent while navigating.

Specifically, over a million sites first set an anti-CSRF cookie on themselves, then redirect to a federated login provider, then the login provider POSTs the login information back to the site. That initial anti-CSRF cookie is only meant to be used in a first-party context. Crucially, however, Lax cookies are not sent on cross-site navigations that use the HTTP POST verb. Making the anti-CSRF cookies SameSite=Lax by default breaks this scenario and thus breaks tons of websites.

[Diagram: the anti-CSRF cookie flow through a federated login POST]

The Two Minute Mitigation

Demanding these security cookies be set to SameSite=None would be both onerous (many more sites would need to change) and misleading (because these cookies are really only meant to go to a 1st party context).

To address this breakage, the new default was adjusted to allow a SameSite-Lax-by-default cookie to be sent on cross-site POST requests for two minutes after it is set, significantly reducing the breakage without giving up all of the security benefit of the change.

Note that the 2 minute mitigation might not be enough for login scenarios that take longer than 2 minutes. For instance, consider the case where the login flow is happening in a background tab, or you have to fetch your Security Key, or a child must get a parent’s permission, etc. Sites that wish to handle scenarios like this will need to store a copy of their anti-CSRF token elsewhere (e.g. sessionStorage seems appropriate).

The Chrome team plans to eventually remove the 2 minute mitigation entirely.

Compat Landmine: document.cookie

In Firefox and Safari, the document.cookie DOM property matches the Cookie header, including omission of cookies that were restricted by SameSite navigation rules.

In contrast, in Chrome and Edge, SameSite cookies that are omitted from the Cookie header are still included in the document.cookie collection following a cross-origin navigation. I’ve been convinced that this actually makes more sense, although the reasoning is subtle [issue].

To the extent possible, you should set your cookies with the httponly attribute, so that they’re not available to JavaScript and this compatibility difference is irrelevant.

Test Steps

To test this change in Microsoft Edge, set:

  1. edge://flags/#same-site-by-default-cookies to ENABLE
  2. edge://flags/#cookies-without-same-site-must-be-secure to ENABLE
  3. Close all instances of the browser and start it using msedge.exe --enable-features=SameSiteDefaultChecksMethodRigorously

Step #3 disables the two-minute mitigation that allows SameSite-by-default cookies to be sent on POST requests; this exemption is slated for removal from Chromium at some unknown future time (likely within 6-12 months).

Overall testing guidance from Chromium applies to Edge: https://www.chromium.org/updates/same-site/test-debug

If you’re a Fiddler user, you can use this script to easily visualize which cookies are being set with which SameSite values.

What’s Next?

Assuming the rollout happens on schedule, users will get more security and more privacy. But that’s just the start.

The next step is to combat the “non-secure-cookies-are-trackable” attack mentioned previously. To prevent non-secure cross-site cookies from being used by network observers to follow users around the web, SameSite=None cookies will be rejected unless they also carry the Secure attribute. Chrome’s timeline for enabling this change by default seems squishier, but ChromeStatus claims it is also slated for Chrome 80.

After that, the next step is to combat the inevitable abuse by trackers.

Because trackers can simply opt their own cookies out of restrictions by setting SameSite=None, trackers will do so. But this isn’t as bad as you might think: by forcing sites to explicitly declare each cookie for which cross-site use is intended, browsers can then focus extra love and attention on such cookies.

If we peek at Chrome’s flags page today, we see an interesting hint about the Chrome team’s plans:

[Screenshot: the related entry on chrome://flags]

Enabling that option turns on EnableRemovingAllThirdPartyCookies, which adds a new Remove Third-Party Cookies button to the All cookies and site data page. When clicked, it pops a confirmation dialog:

[Screenshot: the third-party cookie removal confirmation dialog]

The HandleRemoveThirdParty() function invoked by the Clear button clears not only the cookies from those domains, but also all of the site data for the sites on which those cookies existed. This provides a strong disincentive for sites to opt-in to SameSite=None cookies unless they really need to.

(Disclaimer: Chrome might never launch the “Clear third-party cookies” button, but it’s in the code today).

You can expect that browser designers will soon dream up new and interesting remediations for cookies marked for access across multiple sites. For instance, browsers could limit the lifetime of those cookies to days or hours, or even a single browser session (tricky).

We live in interesting times!

Update: Chrome pushed back experimenting with this feature from Chrome 78 to Chrome 79. As of March 2020, they’ve enabled the feature for 50% of all Chrome 80+ users in all channels.

Update: The Chromium team have published a post about the new SameSite cookie default on their blog, including a list of incompatible legacy browsers that refuse to accept cookies that specify SameSite=None, and guidance on their rollout timeline.

-Eric

Cookie-dough hero photo by Pam Menegakis on Unsplash.

I recently bought a few new domain names under the brand new .app top-level-domain (TLD). The .app TLD is awesome because it’s on the HSTSPreload list, meaning that browsers will automatically use only HTTPS for every request on every domain under .app, keeping connections secure and improving performance.

I’m not doing anything terribly exciting with these domains for now, but I’d like to at least put up a simple welcome page on each one. Now, in the old days of HTTP, this was trivial, but because .app requires HTTPS, that means I must get a certificate for each of my sites for them to load at all.

Fortunately, GitHub recently started supporting HTTPS on GitHub Pages with custom domains, meaning that I can easily get an HTTPS site up and running in just a few minutes.

1. Log into GitHub, go to your Repositories page and click New:

2. Name your new repository something reasonable:

3. Click to create a simple README file:

4. Edit the file

5. Click Commit new file

6. Click Settings on the repository

7. Scroll to the GitHub Pages section and choose master branch and click Save:

8. Enter your domain name in the Custom domain box and click Save

9. Login to NameCheap (or whatever DNS registrar you used) and click Manage for the target domain name:

10. Click the Advanced DNS tab:

11. Click Add New Record:


12. Enter four new A Records for the host @ with the list of IP addresses GitHub Pages uses:

13. Click Save All Changes.

14. Click Add New Record and add a new CNAME Record. Enter the host www and a target value of username.github.io. Click Save All Changes: 

15. Click the trash can icons to delete the two default DNS entries that NameCheap had for your domain previously:

16. Try loading your new site.

  • If you get a connection error, wait a few minutes for DNS to propagate and re-verify the DNS records you just added (a quick way to re-check them from a script is sketched after these notes).
  • If you get a certificate error, look at the certificate. It’s probably the default GitHub certificate. If so, look in the GitHub Pages settings page and you may see a note that your certificate is awaiting issuance by LetsEncrypt.org. If so, just wait a little while.

  • After the certificate is issued, your site will load without errors.
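If you’d rather re-check those DNS records from a script than from a lookup site, here’s a small sketch using only Python’s standard library. The hostnames are placeholders; compare the printed addresses against the Pages IPs listed in GitHub’s documentation:

import socket

def print_a_records(hostname: str) -> None:
    infos = socket.getaddrinfo(hostname, 443, socket.AF_INET)
    addresses = sorted({info[4][0] for info in infos})
    print(f"{hostname} -> {', '.join(addresses)}")

print_a_records("example.app")      # apex: should list GitHub Pages' four A records
print_a_records("www.example.app")  # www: should resolve via the username.github.io CNAME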


Go forth and build great (secure) things!

-Eric Lawrence

Recently, my web host stopped supporting the FrontPage Server Extensions used by Microsoft Expression Web 4 for website publishing (FPSE is now out-of-support). FPSE allowed me to publish to my site over a HTTPS connection, helping keep my password safe and my uploaded files unmodified.

Unfortunately, the alternative FTP transport is completely insecure: passwords and data transfer in plaintext and can be stolen or tampered with, and Microsoft products generally don’t support FTPS. As a consequence, I had to stop using Expression Web to edit my various websites. Update: As it turns out, you *can* use FTPS inside Expression Web, but only via the Site > Open Site menu, not the File > Open menu. Expression then warns you that FTP is insecure (bizarrely implying HTTP is better), but network monitoring shows that it’s properly using FTPS under the covers.

Fortunately, my favorite text editor, EditPad Pro offers FTPS support and I quickly moved to using it to edit my site.

Except for one thing: even when using a shared host, the server always returned the same certificate, one whose Subject Name didn’t match the hostname of my website. Yet EditPad didn’t complain at all; it just silently accepted any certificate and sent my username and password. An active man-in-the-middle can easily intercept FTPS connections and return a dummy certificate, which EditPad would happily use.

I reported this vulnerability to the developer and I’m happy to see that he’s fixed this problem in version 7.4.0; if the certificate presented isn’t valid for the target, a security prompt is shown every time:

[Screenshot: EditPad Pro’s TLS name mismatch warning]

Ideally, my webhost will start using my installed certificate for FTPS and WebDAV connections, but in the interim, manual certificate validation serves as a fallback.

If you build any TLS-protected client or server application, you should always validate the certificate presented during the handshake.
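In Python, for example, the standard library makes it straightforward to get this right. Here’s a minimal sketch of an FTPS client that verifies both the certificate chain and the hostname before sending credentials; the host and credentials are placeholders:

import ssl
from ftplib import FTP_TLS

# create_default_context() enables both certificate and hostname verification.
ctx = ssl.create_default_context()

with FTP_TLS(context=ctx) as ftps:
    ftps.connect("ftp.example.com", 21)
    ftps.auth()      # upgrade the control connection to TLS; fails on an invalid certificate
    ftps.login("user", "password")
    ftps.prot_p()    # protect the data connection as well
    print(ftps.nlst())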

Stay safe out there!

-Eric Lawrence

I’ve written about Zopfli quite a bit in the past, and even wrote a tool to apply it to PNG files. For fun, I had a look at one of the most optimized pages in the world: Google.com, through the lens of Zopfli.

Here are the basic resources delivered by the Google homepage:

[Table: resources delivered by the Google homepage, with the potential Zopfli savings for each]

This breakdown shows that Google isn’t optimizing its own assets using the compressor it wrote. The Savings column shows the number of bytes saved by using Zopfli over whatever Google used to compress the asset. Using the default settings in an ideal world, Google could save up to 16.5k, almost 5% of the bytes transferred, by using Zopfli.

I’ve color-coded the column based on how practical I believe the savings to be: the green numbers are the static images where there’s no question the size benefit could be realized. The yellow numbers are cases where script files are compressed; given the complicated query string parameters, I’m betting these scripts are dynamically generated and the compression cost of Zopfli might not be reasonable. The red number is the homepage itself, which probably isn’t reasonable to Zopfli-compress, as it is certainly generated dynamically.

So, most likely the savings of a practical Zopfli deployment on the homepage would be about 3.7KB; savings are much greater on other pages and other sites.
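If you’d like to run a similar what-if comparison against your own assets, a rough sketch follows; it assumes the zopfli command-line tool is installed and on your PATH, and compares its output size against ordinary gzip at maximum compression:

import gzip
import subprocess
import sys
from pathlib import Path

def compare(path: str) -> None:
    original = Path(path).read_bytes()
    gzip_size = len(gzip.compress(original, compresslevel=9))
    # "zopfli -c <file>" writes the compressed stream to stdout.
    zopfli_size = len(subprocess.run(["zopfli", "-c", path],
                                     capture_output=True, check=True).stdout)
    print(f"{path}: gzip -9 = {gzip_size} bytes, zopfli = {zopfli_size} bytes, "
          f"savings = {gzip_size - zopfli_size} bytes")

if __name__ == "__main__":
    compare(sys.argv[1])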

More interesting, however, is the Google API CDN, which hosts scripts for other sites; optimizing these would take a minute or two at most and make every site that uses them faster.

[Table: potential Zopfli savings for scripts hosted on the Google APIs CDN]

Use Zopfli; give the tubes a little bit more room.

-Eric

PS: You may already have zopfli.exe on your system; Fiddler installs a copy to its \Tools\ subfolder!