I’m passionate about building tools that help developers and testers discover, analyze, and fix problems with their sites.

Some of the first code I ever released was a set of trivial JavaScript-based browser extensions for IE5. I later used the more powerful COM-based extensibility model to hack together some add-ons that would log ActiveX controls and perform other tasks as you browsed. But the IE COM extensibility model was extremely hard to use safely, and I never released any significant extensions based on it.

Later, I built a simple Firefox extension based on their XUL Overlay extensibility model to better integrate Fiddler with Firefox, but that extension recently stopped working as Mozilla began retiring the older extensibility model in favor of a new one.

Having joined the Chrome team, I was excited to see how difficult it would be to build extensions using the Chrome model, which is conceptually quite different from both the old IE and Firefox models. Both Microsoft Edge (IE’s successor) and Firefox are adopting the Chrome model for their new extensions, so I figured the time would be well spent.

I haven’t ever coded anything of consequence using JavaScript, HTML, and CSS, so I expected that the learning curve would be pretty steep.

It wasn’t.

Sitting on the couch with my older son and an iPad a few weeks ago, I idly Googled for “Create Chrome Extension.” One of the first hits was John Sonmez’s article “Create a Chrome Extension in 10 Minutes Flat.” I’m currently reading a book he wrote and I like his writing style, so with a fair amount of skepticism, I opened the article.

Wow, that looks easy.


After my kids went to bed that night, I banged out my first trivial Chrome extension. After suffering through nearly non-existent documentation of IE’s extension models, and the largely outdated and confusing docs for Mozilla’s old model, I was surprised and delighted to discover that Chrome has great documentation for building extensions, including a simple tutorial and a developer’s guide.

Beyond that, there’s a magically wonderful Chrome Extension Source Viewer that allows you to easily peruse the source code of every extension on the Chrome Web Store:

CRX Viewer

CRX Source

Over the next few weeks, I built my moarTLS Analyzer extension, mostly between the hours of 11pm and 2am, peak programmer hours in my youth, but that was long ago.

I pulled out some old JavaScript and CSS books I’d always been meaning to read, and giggled with glee at the joy of building for a modern browser where all of the legacy hacks these books spilled so much ink over were no longer needed.

I found a few minor omissions from the Chrome documentation (bugs submitted) but on the whole I never really got stuck.

My Code

You can install the extension from the Chrome Web Store. The extension is on GitHub; have a look at the source and feel free to report bugs.

The code is simple and you can read it all in less than five minutes:

Code Map

The images folder contains the images used in the toolbar and the report UI. The manifest.json file defines the extension’s name, icons, permissions, and browser compatibility. The popup.{css|html|js} files implement the flyout report UI. When invoked, the flyout uses the executeScript and insertCSS APIs to add the injected.{css|js} files to each frame in the currently loaded page. The script sends a message back to the popup to tattle on any non-secure links it finds. The background.js file watches for file downloads and shows an alert() if any non-secure downloads occur. The options.{css|html|js} files implement the extension’s settings page.
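As a rough sketch of how those pieces tie together, a Manifest V2 extension of this shape might declare the following (the field values here are illustrative, not copied from the actual moarTLS manifest):

```json
{
  "manifest_version": 2,
  "name": "moarTLS Analyzer",
  "version": "1.0",
  "permissions": ["activeTab", "downloads"],
  "browser_action": {
    "default_icon": "images/icon.png",
    "default_popup": "popup.html"
  },
  "background": { "scripts": ["background.js"], "persistent": false },
  "options_page": "options.html"
}
```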

Things I Learned

allFrames Isn’t Always

When you call chrome.tabs.executeScript with allFrames: true, your script is injected only into same-origin subframes if your manifest.json specifies only the activeTab permission. To inject your script into cross-origin subframes as well, you must declare the <all_urls> permission. This is unfortunate, but entirely logical for security reasons. Declaring the <all_urls> permission results in a somewhat scary permissions dialog when the extension is installed:

Permissions prompt
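In code, the injection looks something like this (a sketch using the Manifest V2 chrome.tabs APIs; injectIntoAllFrames is a hypothetical helper name, not part of the extension’s actual source):

```javascript
// Inject the analysis styles and script into every frame of a tab.
// With only the "activeTab" permission, allFrames: true reaches
// same-origin subframes; cross-origin subframes additionally require
// the "<all_urls>" permission in manifest.json.
function injectIntoAllFrames(tabId) {
  chrome.tabs.insertCSS(tabId, { file: "injected.css", allFrames: true });
  chrome.tabs.executeScript(tabId, { file: "injected.js", allFrames: true });
}
```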

The Shadow DOM Hides Things

When I was testing the extension, I noticed that it wasn’t working properly on the ChromeStatus site. A quick peek at the Developer Tools revealed that my call to document.querySelectorAll("a[href]") wasn’t turning up any anchor elements nested inside the #shadow-root nodes.

#shadow-root node

These nodes are part of the Shadow DOM, a technology that allows building web pages containing web components—encapsulated blocks of markup written in HTML and CSS. By default, the internal markup of these nodes is invisible to normal DOM APIs like getElementsByClassName.

Fortunately, this was easy to fix. While deprecated in CSS, the /deep/ selector can still be used by querySelectorAll, and changing my code to document.querySelectorAll("* /deep/ a[href]"); allowed enumeration of the links in the Shadow DOM.
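The link-enumeration logic ends up looking roughly like this (a sketch; findAllLinks is a hypothetical name, and the fallback branch is my addition for engines that reject /deep/):

```javascript
// Enumerate links, piercing shadow roots where the engine allows it.
// The /deep/ combinator is deprecated in CSS but still accepted by
// querySelectorAll in Chrome at the time of writing.
function findAllLinks(doc) {
  try {
    return doc.querySelectorAll("* /deep/ a[href]");
  } catch (e) {
    // Engines that reject /deep/ throw a SyntaxError; fall back to
    // light-DOM links only.
    return doc.querySelectorAll("a[href]");
  }
}
```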

The Downloads API Is Limited

The chrome.downloads API offers a lot of functionality, but not the one key bit I wanted—access to the raw file after the download is complete. Enabling moarTLS to warn users when a file download came from HTTP was easy, but I wanted to also automatically compute the hash of the downloaded file and display it for examination (since some sites still don’t sign files but they do publish their hashes).

Unfortunately, it looks like the only way to achieve this is to ship a native platform installer that installs an executable, and have the Chrome extension use nativeMessaging to invoke that executable on the file. I’ll try that soon; I’m planning to write the native portion in Go so that it will run on Windows, Linux, and OS X.
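The extension side of that plan might be sketched like this (the host name and message format here are hypothetical; chrome.runtime.sendNativeMessage is the real API):

```javascript
// Ask a natively-installed helper (registered by a platform installer
// under the hypothetical name "com.example.moartls_hasher") to hash a
// downloaded file, then hand the result to a callback.
function requestFileHash(path, onHash) {
  chrome.runtime.sendNativeMessage(
    "com.example.moartls_hasher",
    { command: "sha256", path: path },
    function (response) {
      // response is undefined if the native host failed to start.
      onHash(response ? response.digest : null);
    }
  );
}
```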

Landmines Around Filename Case-Sensitivity

chrome-extension:// URLs are case-sensitive only on Linux and Chrome OS, which means you can easily write an extension that works correctly on Mac and Windows but fails on Chrome OS and Linux. On Chrome OS, the extension may be marked as corrupt; on Linux, the request will simply fail (probably breaking the extension).

Publishing your Extension is Easy

I figured getting my add-on listed on the Chrome Web Store would be complicated. It’s not. It costs $5 to get a developer account. Surprisingly, you don’t use the Pack extension command in the chrome://extensions tab; you instead just ZIP up the folder containing your manifest and other files. Be sure to omit unneeded files like unit tests, unused fonts, etc., and optimize your images. After you’ve got that ZIP, you simply upload it to the store. You’ll need to use the WebUI to upload a number of screenshots and other metadata about the extension, but it’ll be live for everyone to download shortly thereafter.

Updates are simple—just upload a new ZIP file, tweak any metadata, and wait about an hour for the update to start getting deployed to users.

Firefox, Edge, and Opera

After publishing my extension yesterday, two interesting things happened. First, someone said “I’ll use it when it runs in Firefox,” and second, Microsoft released the first build of Edge with support for browser extensions. Last night, I decided to look at what’s involved in porting moarTLS to Firefox and Edge.

For Firefox, it’s actually pretty straightforward and took about twenty minutes (coding without a mouse!):

Firefox with moarTLS


I grabbed the Nightly build of Firefox and popped open their instructions on Porting Extensions from Chrome and Loading Unpacked Extensions from Disk. I had to make only a few tiny tweaks to get my extension running in Firefox:

1. The manifest needs an applications object:

  "applications": {
    "gecko": {
      "id": "",
      "strict_min_version": "47.0"
    }
  }
2. The chrome object is named browser instead. You can resolve this with the following code at the top of your script:

  window.chrome = window.chrome || window.browser || window.msBrowser;

(The chrome object is also defined in Firefox, so it’s not clear there’s any value in preferring one over the other.)

3. It appears the /deep/ selector doesn’t work.

4. Styles using the -webkit- prefix need either the -moz- prefix or need to gracefully fall back.

5. The storage API is not available to content scripts yet.


Mozilla publicly tracks the progress of their WebExtensions implementation.


Getting the extension running in Microsoft Edge wasn’t possible yet. At first, even after adding:

  “minimum_edge_version”: “33.14281.1000.0”,

…to the manifest, the extension wouldn’t load at all. SysInternals’ Process Monitor revealed that the browser process was getting Access Denied when trying to read manifest.json. I suspect the reason is that Microsoft hasn’t yet hooked up the plumbing that allows read access from the sandboxed AppContainer. This would explain why Microsoft’s three demo extensions are unpacked by an executable instead of a plain ZIP file: the executable probably calls ICACLS.exe to set up permissions on the folder to allow reads from the sandbox. I tested this theory by allowing their installer to unpack one of the demo extensions, then ripping out all of their files in that folder and replacing them with my own, and my extension loaded.

The extension still doesn’t run properly, however; none of Microsoft’s three demos uses a browser_action with a default_popup, so I’m guessing that maybe they haven’t hooked up this capability yet. I’m hassling the Edge team on Twitter. :)

I haven’t tried building an Opera Extension yet, but I suspect my Chrome Extension will probably work almost without modification.

There are many interesting things to say about HTTP caching. I’ve blogged about them a lot in the past.

Today’s public service announcement clears up two extremely common misconceptions:

1. The no-cache directive does not mean “do not cache” (even though IE versions prior to IE10 implemented it that way).

What it really means is do not reuse this item from the cache without first validating with the server that it is fresh.

If the no-cache directive does not specify a field-name, then 
a cache MUST NOT use the response to satisfy a subsequent request
without successful revalidation with the origin server.

2. The must-revalidate directive does not mean “you must revalidate this resource with the server before using it.”

What it really means is do not reuse this item from the cache after it expires without first validating with the server that it is fresh. It’s basically saying: “Don’t ignore the Expires and max-age directives.” Which a client absolutely shouldn’t be doing anyway.

If the response includes the "must-revalidate" cache-control
directive, the cache MAY use that response in replying to a
subsequent request. But if the response is stale, all caches
MUST first revalidate it with the origin server.
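The distinction between the two directives can be captured in a few lines (a deliberately simplified sketch, not a full RFC 7234 freshness implementation; mayReuseWithoutRevalidation is a hypothetical helper):

```javascript
// Decide whether a cached response may be reused without first
// revalidating with the origin server.
function mayReuseWithoutRevalidation(cacheControl, ageSeconds) {
  const directives = cacheControl.toLowerCase().split(",").map(s => s.trim());
  // no-cache: the response may be stored, but every reuse requires
  // successful revalidation with the origin server.
  if (directives.includes("no-cache")) return false;
  const maxAge = directives.find(d => d.startsWith("max-age="));
  const fresh = maxAge ? ageSeconds < parseInt(maxAge.slice(8), 10) : false;
  // must-revalidate only bites once the response is stale; a fresh
  // response may still be reused, the same answer a well-behaved
  // cache gives without the directive.
  return fresh;
}
```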



The new UI of Internet Explorer 7 included a dedicated search box adjacent to the address bar, like the then-new Firefox. As IE7 was built between 2004 and 2006, Microsoft didn’t have a very credible entry in the search engine market; Bing wouldn’t appear until 2009. The IE team made a wise decision in support of the open web: we embraced the nascent OpenSearch specification, developed by Amazon, allowing the browser to discover search providers offered by a site and enabling users to easily add those providers to IE’s search box.

This was a huge win for openness: it ensured that IE users had their choice of the best search engines the web had to offer. There was no lock-in.
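The provider itself is described by a small OpenSearch description document, an XML file the browser discovers via a link tag on the site; a minimal sketch (the URLs here are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Description>Search example.com</Description>
  <Url type="text/html"
       template="https://example.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```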

Aside: The Narrative

Part of the Internet Explorer team’s internal narrative1 for years was that only two browsers were properly aligned with users’ interests: the only browsers where the customer was also the user were Safari and Internet Explorer, the browsers bought by customers who purchased their vendor’s hardware and software, respectively. In contrast, the story went, Firefox had one customer, Google, who paid hundreds of millions of dollars for the right to be the default search engine. Later, Chrome had many thousands of customers: the AdSense advertisers who were buying access to the real product (millions of users’ eyeballs). As a consequence, the narrative went, the IE team were champions of the user, and thus we’d make every decision with only our customers’ experience in mind.

What Happened Next

Fortunately, OpenSearch was quickly successful, and both Chrome and Firefox adopted it and the window.external.AddSearchProvider API that allowed a site (upon a user-initiated action) to offer to add a new Search Provider to the browser. This enabled customers to easily access search engines both large (Google, Yahoo, Bing, etc.) and niche (Amazon, MSDN, etc.) within their browser of choice. Some browsers even used OpenSearch to allow users to access search providers without installing them.
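On the page side, offering a provider is nearly a one-liner (a sketch; offerSearchProvider is a hypothetical wrapper, the description URL is illustrative, and the call must come from a user-initiated action):

```javascript
// Prompt the user to add a search provider described by an OpenSearch
// XML document; returns false where the API is unavailable.
function offerSearchProvider(wnd, descriptionUrl) {
  var ext = wnd.external;
  if (ext && typeof ext.AddSearchProvider === "function") {
    ext.AddSearchProvider(descriptionUrl); // Browser shows a consent prompt.
    return true;
  }
  return false;
}
```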

Openness won…


… until it didn’t. The Internet Explorer team has indicated that they don’t plan to support, in their next browser (currently codenamed Project Spartan), the de facto standard AddSearchProvider API that they themselves invented. They’ve offered a variety of defenses of the decision (e.g. “Safari doesn’t support it, so we don’t have to!”) of a sort they’ve previously ridiculed in other contexts (e.g. Pointer Events).

Currently, in fact, Spartan is hardcoded to use just one search engine: Bing. I have no doubt that the Spartan team will add additional search engines to their browser before they ship, but only an open API provides the freedom for sites and users to interact without confounding political and economic decisions. If I want to switch to a privacy-focused engine like DuckDuckGo, that should be trivial. If I want the ability to quickly run MSDN searches, it shouldn’t require petitioning the IE development team.

Security And Privacy

Making matters worse, Spartan users’ searches are sent to Bing over the network in plaintext, despite the fact that Bing supports HTTPS and the latest versions of both Chrome and Firefox use that HTTPS provider.

Some have argued that AddSearchProvider is “too powerful” and browsers shouldn’t offer APIs that enable changes to their configuration, even with user consent. This argument is compelling until you notice what actually happens—when you take away the sandboxed, restricted API, sites don’t just throw up their hands and say “Ah well, guess we’ll go home.” Instead, they send the user executable downloads that can take any action they like on the system, changing the search provider and, while they’re at it, reconfiguring the user’s other browsers, changing the search page, throwing in a toolbar or two, or whatever. Once the user’s suckered into running your code, why not maximize the value? And the Windows ecosystem continues its swirl toward the drain…

Other users have argued that a “gallery” of search providers is the right way to go. There are many problems with this approach. First, it requires that each site go to Microsoft, hat in hand, and register a provider. It requires that users go out of their way to visit the Gallery. Worst of all, it doesn’t provide any user workaround when the Gallery gets things wrong: for example, both Bing and Google offer HTTPS-based searches, and have for years, but if you install their official providers from the IE Gallery, you get insecure search that leaks your keystrokes as you type in the address bar. The Microsoft Security Response Center (MSRC) has indicated that they do not consider this a security vulnerability.

In contrast, when AddSearchProvider is supported, the search engine can itself offer the proper, secure, search URLs. Or a user can build their own provider.

Please join me in begging the Internet Explorer team to reconsider: Support freedom. #SupportOpenSearch.

Vote here to fix Spartan: Bug Tracker link

Update: Hours after this post, the April security update for IE broke the AddSearchProvider API in existing IE versions. :-(

-Eric Lawrence

1 The validity of this narrative is itself worthy of its own post, so please don’t bother flaming the comments below.