bluebadge

Prelude

In late 2004, I was the Program Manager for Microsoft’s clipart website, delivering a million pieces of clipart to Microsoft Office customers every day. It was great fun. But there was a problem– our “Clip of the Day” feature, meant to spotlight a new and topical piece of clipart every day, wasn’t changing as expected.

After much investigation (could the browser itself really be wrong?!?), I wrote to the IE team to complain about what looked like bugs in its caching implementation. In a terse reply, I was informed that the handful of people then left on the browser team were only working on critical security fixes, and my caching problems weren’t nearly important enough to even look at.

That night, unable to sleep, I tossed and turned and fumed at the seeming arrogance of the job link in the respondent’s email signature… “Want to change the world? Join the new IE team today!”

Gradually, though, I calmed down and reasoned it through… While the product wasn’t exactly beloved, everyone I knew with a computer used Internet Explorer. Arrogant or not, it was probably accurate that there was nothing I could do with my career at that time that would have as big an impact as joining the IE team. And, I smugly realized that if I joined the team, I’d get access to the IE source code, and could go root out those caching bugs myself.

I reached out to the IE lead for an informational interview the following day, and passed an interview loop shortly thereafter.

After joining the team, I printed out the source code for the network stack and sat down with a red pen. There were no fewer than six different bugs causing my “Clip of the Day gets stuck” issue. When my devs fixed the last of them, I mentioned this and my story to my GPM (boss’ boss).

“Does this mean you’re a retention risk?” Tony asked.

“Maybe after we fix the rest of these…” I retorted, pointing at the pile of paper with almost a hundred red circles.

No one in the world loved IE as much as I did, warts and all. Investigating, documenting, and fixing problems in Internet Explorer was a nearly all-consuming passion throughout my twenties. Internet Explorer pioneered a broad range of (mostly overlooked) innovations, and in rediscovering them, I felt like one of the characters on Lost — a castaway in a codebase whose brilliant designers were long gone. IE9 was a fantastic, best-of-its-time browser, and I’ll forever be proud of it. But as IE9 wound down and the Windows 8 adventure began, it was already clear that its lead would not last against the Chrome juggernaut.

I shipped IE7, IE8, IE9, and IE10, leaving Microsoft in late 2012, shortly after IE10 was finished, to build Fiddler for Telerik.

In 2015, I changed my default browser to Chrome. In 2016, I joined the Chrome Security team. I left Google in the summer of 2018 and rejoined the Microsoft Edge team, and that summer and fall I spent 50% of my time rediscovering bugs that I’d first found in IE and blogged about a decade before.

Fortunately, Edge’s faster development pace meant that we actually got to fix some of the bugs this time, but Chrome’s advantages in nearly every dimension still left Edge very much an underdog. Happily, the other half of my time was spent working on our (then) secret project to replatform the next version of our Edge browser atop the open-source Chromium project.

We’ve now shipped our best browser ever — the Chromium-based Microsoft Edge. I hope you’ll try it out.

It’s with love that I beg you… please let Internet Explorer retire to the great bitbucket in the sky. It’s time. It’s been time for a long time.

Burndown List

Last night, as I read the details of yet another 0-day security bug in Internet Explorer, I posted the following throwaway tweet, which netted a surprising number of interactions:

I expected the usual slew of “Yeah, IE is terrible,” and “IE was always terrible,” and “Somebody tell my {boss,school,parents}” responses, but I didn’t really expect serious replies. I got some, however, and they’re interesting.

Shared Credentials

Internet Explorer shares a common networking stack (WinINET) and Cookie Jar (for Intranet/Trusted sites) with many native code applications on Windows, including Windows Explorer. Tim identifies a scenario where Windows Explorer relies on an auth cookie being found in the WinINET cookie jar, put there by Internet Explorer. We’ve seen similar scenarios in some Microsoft Office flows.

Depending on a cookie set by Internet Explorer might’ve been somewhat reasonable in 2003, but Vista/IE7’s introduction of Protected Mode (and cookie jar partitioning) in 2006 made this a fragile architecture. The fact that anything depends upon it in 2020 is appalling.

Thoughts: I need to bang on some doors. This is depressing.

Certificate Issuance

Developers who apply digital signatures to their apps and server operators who expose their sites over HTTPS do so using a digital certificate. In ideal cases, getting a certificate is automatic and doesn’t involve a browser at all, but some Certificate Authorities require browser-based flows. Those flows often demand that the user run either Internet Explorer or Firefox, because the former supports ActiveX Controls for certificate issuance, while the latter, until recently, supported the Keygen element.

WebCrypto, now supported in all modern browsers, serves as a modern replacement for these deprecated approaches, and some certificate issuers are starting to build issuance flows atop it.
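To illustrate (this is not any particular CA’s flow), here is a minimal TypeScript sketch of the kind of in-browser key generation WebCrypto makes possible; the algorithm parameters are just an example, and a real issuance flow would wrap the exported public key in a proper certificate request (e.g. PKCS#10) before sending it to the CA.

// Minimal sketch: generate a keypair in the browser and export the public
// half so it can be submitted to a CA. The algorithm choice is illustrative.
async function generateKeyForIssuance(): Promise<ArrayBuffer> {
  const keyPair = await crypto.subtle.generateKey(
    {
      name: "RSASSA-PKCS1-v1_5",
      modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]), // 65537
      hash: "SHA-256",
    },
    false,              // keep the private key non-extractable
    ["sign", "verify"]
  );
  // Public keys are always exportable; SPKI is a common interchange format.
  return crypto.subtle.exportKey("spki", keyPair.publicKey);
}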

Thoughts: We all need to send some angry emails. Companies in the Trust space should not be built atop insecure technologies.

Banking, especially in Asia

A fascinating set of circumstances led to Internet Explorer’s dominance in Asian markets. First, early browsers had poor support for Unicode and East Asian character sets, forcing website developers to build their own text rendering atop native code plugins (ActiveX). South Korea mandated use of a locally-developed cipher (SEED) for banking transactions[1], and this cipher was not implemented by browser developers… ActiveX again to the rescue. Finally, since all users were using IE, and were accustomed to installing ActiveX controls, malware started running rampant, so banks and other financial institutions started bundling “security solutions” (aka rootkits) into their ActiveX controls. Every user’s browser was a battlefield with warring native code trying to get the upper hand. A series of beleaguered Microsoft engineers (including Ed Praitis, who helped inspire me to make my first significant code commits to the browser) spent long weeks trying to keep all of this mess working as we rearchitected the browser, built Protected Mode and later Enhanced Protected Mode, and otherwise modernized a codebase nearing its second decade.

Thoughts: IE marketshare in Asia may be higher than other places, but it can’t be nearly as high as it once was. Haven’t these sites all pivoted to mobile apps yet?

Reader Survey: Do you have any especially interesting scenarios where you’re forced to use Internet Explorer? Sound off in the comments below!

Q&A

Q: I get that IE is terrible, but I’m an enterprise admin and I own 400 lousy websites written by a vendor in a hurry back in 2004. These sites will not be updated, and my employees need to keep using them. What can I do?

A: The new Chromium-based Edge has an IE Mode; you can configure your users so that Edge will use an Internet Explorer tab when loading those sites, directly within Edge itself.

Q: Uh, isn’t IE Mode a security risk?

A: Any use of an ancient web engine poses some risk, but IE Mode dramatically reduces the risk, by ensuring that only sites selected by the IT Administrator load in IE mode. Everything else seamlessly transitions back to the modern, performant and secure Chromium Edge engine.

Q: What about Web Browser Controls (WebOCs) inside my native code applications?

A: In many cases, WebOCs inside a native application are used to render trusted content delivered from the application itself, or from a server controlled by the application’s vendor. In such cases, and presuming that all content is loaded over HTTPS, the security risk of the use of a WebOC is significantly lower. Rendering untrusted HTML in a WebOC is strongly discouraged, as WebOCs are even less secure than Internet Explorer itself. For compatibility reasons, numerous security features are disabled-by-default in WebOCs, and the WebOC does not run content in any type of process sandbox.

Looking forward, the new Chromium-based WebView2 control should be preferred over WebOCs for scenarios that require the rendering of HTML content within an application.

Q: Does this post mean anything has changed with regard to Internet Explorer’s support lifecycle, etc?

A: No. Internet Explorer will remain a supported product until its support lifecycle runs out. I’m simply begging you to not use it except to download a better browser.

Footnotes

[1] The SEED cipher wasn’t just a case of the South Korean government suffering from not-invented-here, but instead a response to the fact that the US Government at the time forbade export of strong crypto.

I rejoined Microsoft as a Principal Program Manager for the web networking team on June 4th, 2018.

Travellog: Forward

June 2018

If you offer web developers footguns, you’d better staff up your local trauma department.

In a prior life, I wrote a lot about Same-Origin-Policy, including the basic DENY-READ principle that means that script running in the context of origin A.com cannot read content from B.com. When we built the (ill-fated) XDomainRequest object in IE8, we resisted calls to enhance its power and offer dangerous capabilities that web developers might misconfigure. As evidence that even experienced web developers could be trusted to misconfigure almost anything, we pointed to a high-profile misconfiguration of Flash cross-domain policy by a major website (Flickr).

For a number of reasons (prominently including unwillingness to fix major bugs in our implementation), XDomainRequest received little adoption, and in IE10 IE joined the other browsers in supporting CORS (Cross-Origin-Resource-Sharing) in the existing XMLHttpRequest object.

The CORS specification allows sites to grant extremely powerful cross-origin access to data via the Access-Control-Allow-Origin and Access-Control-Allow-Credentials headers. By setting these headers, a site effectively opts out of the bedrock isolation principle of the web and allows script from any other site to read its data.

Evan Johnson recently did a scan of top sites and found over 600 sites which have used the CORS footgun to disable security, allowing, in some cases, theft of account information, API keys, and the like. One of the most interesting findings is that some sites attempt to limit their sharing by checking the inbound Origin request header for their own domain, without verifying that their domain was at the end of the string. So, victim.com is vulnerable if the attacker uses an attacking page on hostname victim.com.Malicious.com or even AttackThisVictim.com. Oops.
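To make the mistake concrete, here is a hypothetical TypeScript sketch (the domain names and function names are invented for illustration) contrasting the substring-style check described above with an exact-match allow-list:

// Vulnerable: a substring check means "https://victim.com.malicious.com"
// and "https://attackthisvictim.com" both look "trusted".
function isTrustedOriginBroken(origin: string): boolean {
  return origin.indexOf("victim.com") !== -1;
}

// Safer: compare the full Origin value against an explicit allow-list.
const ALLOWED_ORIGINS = new Set(["https://victim.com", "https://app.victim.com"]);

function isTrustedOrigin(origin: string): boolean {
  return ALLOWED_ORIGINS.has(origin);
}

// Only reflect the Origin (and allow credentials) when it matches exactly.
function corsHeadersFor(origin: string): Record<string, string> {
  if (!isTrustedOrigin(origin)) {
    return {};
  }
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Credentials": "true",
  };
}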

Vulnerability Checklist

For your site to be vulnerable, a few things need to be true:

1. You send Access-Control-Allow headers

If you’re not sending these headers to opt-out of Same-Origin-Policy, you’re not vulnerable.

2. You allow arbitrary or untrusted Origins

If your Access-Control-Allow-Origin header only specifies a site that you trust and which is under your control (and which is free of XSS bugs), then the header won’t hurt you.

3. Your site serves non-public information

If your site only ever serves up public information that doesn’t vary based on the user’s cookies or authentication, then same-origin-policy isn’t providing you any meaningful protection. An attacker could scrape your site from a bot directly without abusing a user’s tokens.

Warning: If your site is on an Intranet, keep in mind that it is offering non-public information—you’re relying upon ambient authorization because your sites’ visitors are inside your firewall. You may not want Internet sites to be able to scrape your Intranet.

Warning: If your site has any sort of login process, it’s almost guaranteed that it serves non-public information. For instance, consider a site where I can log in using my email address and browse some boring public information. If any of the pages on the site show me my username or email address, you’re now serving non-public information. An attacker can scrape the username or email address from any visitor to his site that also happens to be logged into your site, violating the user’s expectation of anonymity.

Visualizing in Fiddler

In Fiddler, you can easily see Access-Control policy headers in the Web Sessions list. Right-click the column headers, choose Customize Columns, choose Response Headers and type the name of the Header you’d like to display in the Web Sessions list.

Add a custom column

For extra info, you can click Rules > Customize Rules and add the following inside the Handlers class:


// Add a new "Access-Control" column to the Web Sessions list, summarizing
// any CORS response headers present on each Session.
public static BindUIColumn("Access-Control", 200, 1)
function FillAccessControlCol(oS: Session): String {
  if (!oS.bHasResponse) return String.Empty;
  var sResult = String.Empty;

  var s = oS.ResponseHeaders.AllValues("Access-Control-Allow-Origin");
  if (s.Length > 0) sResult += ("Origin: " + s);

  if (oS.ResponseHeaders.ExistsAndContains(
    "Access-Control-Allow-Credentials", "true"))
  {
    sResult += " +Creds";
  }

  s = oS.ResponseHeaders.AllValues("Access-Control-Allow-Methods");
  if (s.Length > 0) sResult += (" Methods: " + s);

  s = oS.ResponseHeaders.AllValues("Access-Control-Allow-Headers");
  if (s.Length > 0) sResult += (" SendHdrs: " + s);

  s = oS.ResponseHeaders.AllValues("Access-Control-Expose-Headers");
  if (s.Length > 0) sResult += (" ExposeHdrs: " + s);

  return sResult;
}

-Eric

In 2005, one of my first projects on the Internet Explorer team was improving the user-experience for HTTPS sites (“SSLUX”).

Our first task was to change the certificate error experience from the confusing and misleading modal dialog box:

Certificate errors UX

… to something that more clearly conveyed the risk and which more clearly discouraged users from accepting invalid certificates. We quickly settled upon using a blocking page for bad certificates, a pattern seen in all major browsers today.

Next, we wanted to elevate the security information from the lowly lock buried at the bottom of the window (at best, since the status bar could be hidden entirely):

IE6 status bar

As a UI element, the lock resonated with users, but it wasn’t well understood (“I look for the lock and if it’s there, I’m safe”). We felt it was key to ensure that users not only saw that the connection was secure, but also with whom a secure connection had been made. This was especially important as some enterprising phishers had started obtaining HTTPS certificates for their spoofing sites, with domain names like BankOfTheVVest.com. Less urgently, we also wanted to help users understand that a secure connection didn’t necessarily mean the site is safe—the common refrain was that we’d happily set up a secure connection to a site run by the Russian Mafia, so long as the user recognized who they were talking to.

We decided to promote the HTTPS certificate information to a new UI element next to the address bar1. Called the “Trust Badge”, the button would prominently display the information about the owner and issuer of the HTTPS certificate, and clicking it would allow users to examine the certificate in full:

EV Certificate UI in IE7

Displaying the Issuer of the certificate was deemed especially important– we knew some CAs were doing a much better job than others. High-volume, low-vetting CAs’ $20 certificates were, to users, indistinguishable from the certificates from CAs who did a much more thorough job of vetting their applicants (usually at a much higher price point). The hope was that the UI would both shame lazy CAs and also provide a powerful branding incentive for those doing a good job.

We were pretty excited to show off our new work in IE7 Beta 1, but five months before our beta shipped, Opera beat us to it with Opera 8 beta 2 with a UI that was nearly identical to what we were building.

During those five months, however, we spoke to some of the Certificate Authorities in the Microsoft Root CA program and mentioned that we’d be making some changes to IE’s certificate UI. They expressed excitement to hear that their names would no longer be buried in the depths of a secondary dialog, but cautioned: “Just so long as you don’t do what Opera did.”

“Why’s that?” we asked innocently.

“Well, they show the Subject organization and location information in their UI.”

“And that’s a problem because…” we prompted.

“Well, we don’t validate any of the information in the certificate beyond the domain name,” came the reply.

“But you omit any fields you don’t validate, right?” we asked with growing desperation.

“Nah, we just copy ’em over.”

After the SSLUX feature team picked our collective jaws off the floor, we asked around and determined that, yes, the ecosystem “race to the bottom” had been well underway over the preceding few years, and so-called “Domain validation” (DV) of certificates was extremely prevalent. While not all DV certificates contained inaccurate information, there was no consistent behavior across CAs.

Those CAs who were doing a good job of vetting certificates were eager to work with browsers to help users recognize their products, and even the “cheap” CAs felt that their vetting was better than that of their competitors2. Soon the group that evolved into the CA/Browser Forum was born, pulling in stakeholders of all types (technologists, policy wonks, lawyers) from all over the industry (Microsoft, Mozilla, Konqueror, etc). Meetings were had. And more meetings. And calls. And much sniping and snarking. And more meetings. Eventually, the version 1.0 guidelines for a new sort of certificate were adopted. These Extended Validation (née “Enhanced Validation”, née “High Assurance”) certificates required specific validation steps that every CA would be required to undertake.

EV certificates were far from perfect, but we thought they were a great first step toward fixing the worst problems in the ecosystem.

Browsers would clearly identify when a connection was secured with EV (IE flood-filled the address bar with green) to improve user confidence and provide sites with a business reason to invest (time and money) in a certificate with more vetting. For the EV UI treatment, browsers could demand sites and CAs use stronger algorithms and support features like revocation checking. Importantly, this new class of certificates finally gave browsers a stick to wield against popular CAs who did a poor job—in the past, threats to remove a CA from the trust store rang hollow, because the CA knew that users would blame the browser vendor more than the CA (“Why do I care if bad.com got a certificate, good.com should work fine!”); with EV, browsers could strip the EV UX from a CA (leading their paying customers to demand refunds) without issuing an “Internet Death Sentence” for the entire CA itself.

Our feature was looking great. Then the knives really came out.

…to be continued…

-Eric

1 Other SSLUX investments, like improving the handling of Mixed Content, were not undertaken until later releases.

2 Multiple CAs who individually came to visit Redmond for meetings brought along fraudulent certificates they’d tricked their competitors into issuing in our name, perhaps not realizing how shady this made them look.

Sadly, you’re unlikely to get wealthy by writing a book. You should definitely write one anyway.

My Background

People I respect suggest you shouldn’t write (or buy) books on specific technologies, going so far as to say that writing a book was on their top-10 lists of life regrets. Top-10… whoa!

As a consequence, when I was approached to write a book about Internet Explorer in 2009, I turned it down. “No one reads books anymore,” I asserted to the Vice President of the Internet Explorer team. At the time, I was sitting about 6 feet from my bookshelf full of technical books that I’d been buying over the last decade, including a few I’d purchased in the last month.

My counter-factual stance continued for the next few years, even as I served as a technical reviewer for five books on web development and performance. Then, in 2011, as I started pondering the sale of Fiddler, I met some product managers at the proposed acquirer and watched them use Fiddler for a few minutes. I was appalled—these guys had been looking at Fiddler for six months, and seemed to have settled on the most cumbersome and complicated ways to get anything accomplished. It wasn’t really their fault—Fiddler was incredibly feature rich, and I couldn’t fault them for not reading the manual—there wasn’t one. I felt a moral obligation, whether I sold Fiddler or not, to at least write down how to use it.

At the time, my wife was training to run marathons and I had quite a bit of free time in the mornings. Not knowing any better, I did what I assumed most writers do—I took my laptop to the coffee shop in the mornings and started writing. My resolve was aided by two crutches—

  1. I was happy to describe Fiddler, feature-by-feature, from top to bottom, left to right (Hemingway, this wasn’t), and
  2. I decided that even if I abandoned the project without finishing a proper book, I could just save the results to a PDF, title it “Fiddler-Manual.pdf” and upload it to the Fiddler website.

I’ll cover the mechanics of actual writing in a future post (surprisingly straightforward, but I have some advice that may save you some time), but for now it suffices to say that after nine months of work a few times a week, I had a book.

My Investment

Writing the first edition took about 110 hours authoring, 20 hours of editing, and 30 hours of designing the cover, fixing formatting, and futzing with the printer/publisher webapp. I spent about $50 on draft copies, $40 or so on the cover photo, $20 on the fiddlerbook.com domain name registration, and about $650 for coffee and pastries consumed while writing. From September 2011 to June 2012, I periodically took a “snapshot” of my progress by printing the work:

Fiddler draft copies

Writing took about three months longer than my prediction:

Notebook with dates crossed out

… in part because as I wrote, I discovered what I’ve come to call Book-Driven Development (covered in the next section).

I was proud of the final product, but skeptical that it would earn back even what I spent on coffee.

So, Why Write?

First, it’s something parents and other folks can tangibly hold and appreciate. It’s probably the only way my grandmother will ever have any idea what the heck a “Fiddler Web Debugger” is and why someone might use one.

Second, you can give it to the people who helped. Many people have contributed to Fiddler over the years, and I can inscribe a paperback copy and send it to them as a “Thank you.” When the book was finished, I bought a dozen copies and dropped them off in the offices of colleagues who’d made contributions (in the form of bug reports or suggestions) over the years. One of the proudest moments of my life was when I got an email from Mark Russinovich thanking me for the signed copy and noting that it would nicely complement the ebook he’d already bought.

Third, writing a book makes you think very, very hard about what you’re writing about, and with a different mindset. The Fiddler book took quite a bit longer to write because I made hundreds of improvements to Fiddler while I was writing the book, because I was writing the book. Almost every time I thought of something interesting to explain, I began to write about it, then realized that whatever it was shouldn’t have been so complicated in the first place. I’d then go fix Fiddler itself to avoid the problem. In other cases, explicitly writing out everything you can do with Fiddler made me recognize some important (and in hindsight, obvious) gaps, and go implement those features. I started calling this process Book-Driven Development, and Fiddler was dramatically improved over the authoring of the book. Having said that, this also made writing the book take quite a bit longer—I’d write three or four pages, realize that the feature in question shouldn’t be so complicated, and go implement a checkbox or a button that would do everything the user needed without explanation. Then I’d have to go delete those three or four pages and replace them with “To do <X>, just push the <X> button.”

Fourth, I got to choose what to write about. Fiddler is insanely powerful, but after watching people “in the field” use it, it was plain that most of its functionality is completely unknown to the vast majority of users. While some users watch the videos and read the blog posts, it was clear that there are some number of folks for which a complete book with an end-to-end explanation of the tool is the best way to learn it.

Fifth, it gives you an appreciation for other authors that you may never get otherwise. Marathon runners probably have more respect for other marathon runners than the general public ever will, simply because they know how grueling it is to run 26.2 miles in a way that someone who hasn’t done it never will. I think the same is probably true for book-writers.

So, in summary:

  1. It’s tangible.
  2. You can gift it to contributors.
  3. You’re forced to think like a new user.
  4. You can drive the direction of usage.
  5. You learn to appreciate authors.
  6. Self-publishing significantly changes your book’s financial prospects.

Money Matters

One of the challenges with almost any profit-making endeavor is that folks are so coy about the numbers involved. Inspired by John Resig’s post on Programming Book profits, I’m going to share my numbers here. My goal isn’t to brag—I think these are solid numbers, not runaway success numbers, but I want to show why “You’ll never make any money selling a book” is simply untrue.

Having read a bunch of posts like Jeff Atwood’s and Resig’s, I realized that going the traditional publisher route was a bad deal for both the reader and me– the Fiddler book would have been ~$30 and I’d see maybe two or three dollars of that. Self-publishing is a better deal for the reader (lower prices) and it’s a better deal for me (I get about $6 and $8 respectively). While a traditional publisher would have probably netted me an advance of a few thousand bucks (more than I expected to make), I frankly prefer the “honesty” of being solely responsible for my book’s sales, and the often happy feeling I get when I (obsessively?) check sales stats and find that I sold a handful more copies overnight.

The first edition of Debugging with Fiddler was released in June 2012. The book was self-published on Lulu for the ebook (a PDF) and via CreateSpace (paperback) which was mostly sold on Amazon. The Lulu ebook was sold for a flat $10, while Amazon set the price for the paperback, usually at a small discount off the nominal $18.99 cover price.

Here are the sales figures for the ebook on Lulu:

$20116

The paperback sold slightly better, with 2713 English copies sold; the CreateSpace report below includes the 319 copies sold (so far) of the Second Edition:

3032 copies sold

Beyond the sales of my book, I also agreed to let the book be translated to Korean, Chinese, and Japanese by three local publishers. Each publisher agreed to an advance of about $1500, as well as three or four copies of the translated paperback. Of these, only one publisher (O’Reilly Japan) sends follow-up royalty statements; the book sold approximately 1400 copies in Japanese, and their advance was followed by a royalty check of $1450 in February of 2014.

On March 5th of 2015, I released the Second Edition update, revised to describe the changes in Fiddler over the last three years. This too proved far more successful than I’d ever dreamed. The $14.99 PDF (usually “on sale” for $9.99) took the lion’s share of sales with 840 copies sold, vs. 319 copies of the revised paperback. While the paperback stayed at CreateSpace, I moved to GumRoad for the new ebook for reasons I’ll describe in a future post.

$7453 royalties

So, how much did I earn? A bit more than $53318 or so thus far– the Euro/GBP exchange rates make the math a bit tricky. I spent about 200 hours of solid work on the First and Second Editions, so this works out to a bit over $250 per hour. Pretty amazing, for a project that yielded so many non-financial rewards!

Results Will Vary

It’s worth mentioning that my sales numbers are almost certainly much higher than they would’ve been “naturally”, but for one critical factor— as the developer of Fiddler, I was in a position to advertise the book both on the Fiddler website and in the Fiddler application itself. That exposure to millions of Fiddler users was obviously invaluable and not, alas, something available to most writers.

It’s also the case that as the tool’s author, I benefit from credibility and name recognition (people expect that I’ll know what I’m writing about). As the primary source, I also have the luxury of writing more quickly since I didn’t need to do much research (subject to the “Book driven development” caveats earlier).

My (long overdue) next book project, Blue Badge, is a memoir of my time working at Microsoft, and it won’t benefit from the incredible exposure I had for Debugging with Fiddler. I’m intrigued to see how it sells.


If you’re an aspiring author, or simply interested in book publishing, I hope you found this post useful!

-Eric

Update: Cookie Prefixes are supported by Chrome 49, Opera 36, and Firefox 50. Test page; no status from the Edge team

A new cookie feature called SameSite Cookies has been shipped by Chrome, Firefox and Edge; it addresses slightly different threats.


When I worked on Internet Explorer, we were severely constrained on development resources. While the team made a few major investments for each release (Protected Mode, Loosely-coupled IE, new layout engines, etc), there was a pretty high bar to get any additional feature work in. As a consequence, I very quickly learned to scope down any work I needed done to the bare minimum required to accomplish the job. In many cases, I wouldn’t even propose work if I wasn’t confident that I (a PM) could code it myself.

In many cases, that worked out pretty well; for instance, IE led the way in developing the X-Frame-Options clickjacking protection, not only because we found other approaches to be unworkable (bypassable, compat-breaking, or computationally infeasible) but also because a simple header (internally nicknamed “Don’t Frame Me, Bro”) was the only thing we could afford to build1.
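For reference, the header itself is trivial for a site to emit; below is a hedged TypeScript sketch in which the HttpResponse interface is merely a stand-in for whatever server framework is in use.

// Stand-in for a server framework's response object (illustrative only).
interface HttpResponse {
  setHeader(name: string, value: string): void;
}

// Opt a page out of being framed: DENY blocks all framing,
// while SAMEORIGIN permits framing only by same-origin pages.
function addClickjackingProtection(res: HttpResponse): void {
  res.setHeader("X-Frame-Options", "DENY");
}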

In other cases, aiming for the bare minimum didn’t work out as well. The XDomainRequest object was a tiny bit too simple—for security reasons, we didn’t allow the caller to set the request’s Content-Type header. This proved to be a fatal limitation because it meant that many existing server frameworks (ASP, ASP.NET, etc) would need to change in order to be able to properly parse a URLEncoded request body string.

One of the “little features” that lingered on my whiteboard for several years was a proposal called “Magic-Named Cookies.” The feature aimed to resolve one significant security shortcoming of cookies—namely, that a server has no way to know where a given cookie came from. This limitation relates to the fact that the attributes of a cookie (who set it, for what path, with what expiration time, etc) are sent to the client in the Set-Cookie header, but these attributes are omitted when the Cookie header is sent back to the server. Coupled with cookies’ loose-scoping rules (where a cookie can be sent to both “parent” and “sub” domains, and cookies set from an HTTP origin are sent to the HTTPS origin of the same hostname), this leads to a significant security bug, whereby an attacker can perform a “Cookie Fixation” attack by setting a cookie that will later be sent to (and potentially trusted by) a secure origin. These attacks still exist today, although various approaches (e.g. HSTS with includeSubdomains set) are proposed to mitigate them.

RFC2965 had attempted to resolve this but it never got any real adoption because it required a major change in the syntax of the Cookie header sent back to the server, and changing all of the clients and servers proved too high a bar.

My Magic-Named Cookies proposal aimed to address this using the “The simplest thing that could possibly work” approach. We’d reserve a cookie name prefix (I proposed $SEC-) that, if present, would indicate that a cookie had been set (or updated) over a HTTPS connection. The code change to the browser would be extremely simple: When setting or updating a cookie, if the name started with $SEC-, the operation would be aborted if the context wasn’t HTTPS. As a consequence, a server or page could have confidence that any cookie so named had been set by a page sent on a HTTPS connection.

While magic naming is “ugly” (no one likes magic strings), the proposal’s beauty is in its simplicity—it’d be a two-line code change for the browser, and wouldn’t add even a single bitfield to the cookie database format. More importantly, web server platforms (ASP, ASP.NET, etc) wouldn’t have to change a single line of code. Web developers and frameworks could opt in simply by naming their cookies with the prefix—no other code would need to be written. Crucially, the approach degrades gracefully (albeit insecurely)—legacy clients without support for the restriction would simply ignore the prefix and not enforce the restriction, leaving them no more (or less) safe than they were before.
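If it helps to see the rule written down, here is a purely illustrative TypeScript sketch of the check a browser’s cookie engine would have applied; the proposal was never implemented, so this models only the behavior described above.

// Proposed rule: a cookie whose name starts with "$SEC-" may only be
// set or updated from a secure (HTTPS) context; everything else is
// unchanged, so legacy clients simply skip the check.
function mayAcceptSetCookie(cookieName: string, contextIsSecure: boolean): boolean {
  if (cookieName.startsWith("$SEC-") && !contextIsSecure) {
    return false; // abort: an insecure page tried to set or overwrite it
  }
  return true;
}

// A server that later reads a "$SEC-" cookie from the Cookie header can
// therefore assume the value was set over HTTPS and wasn't fixated by an
// http:// page on the same (or a parent) domain.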

Unfortunately, this idea never made it off my whiteboard while I was at Microsoft. Over the last few years, I’ve tweeted it at the Chrome team’s Mike West a few times when he mentions some of the other work he’s been doing on cookies, and on Wednesday I was delighted to see that he had whipped up an Internet Draft proposal named Cookie Prefixes. The draft elaborates on the original idea somewhat:

  • changing $SEC- to __SECURE-
  • requiring a __SECURE- cookie to have the secure attribute set
  • adding an __HOST- prefix to allow cookies to inform the server that they are host-locked

In Twitter discussion, some obvious questions arose (“how do I name a cookie to indicate both HTTPS-set and Origin locked?” and “is there a prefix I can use for first-party-only cookies?”), which led to questions about whether the design is too simple. We could easily accommodate the additional functionality by making the proposal uglier—for instance, by adding a flags field after a prefix:

Set-Cookie: $RESTRICT_ofh_MyName= I+am+origin-locked+first+party+only+and+httponly; secure; httponly

Set-Cookie: $RESTRICT_s_MyName2= I+am+only+settable+by+HTTPS+without+other+restrictions


… but some reasonably wonder whether this is too ugly to even consider.

Cookies are an interesting beast—one of the messiest hacks of the early web, they’re too important to nuke from orbit, but too dangerous to leave alone. As such, they’re a wonderful engineering challenge, and I’m very excited to see the Chrome team probing to find improvements without breaking the world.

-Eric Lawrence

1 See Dan Kaminsky’s proposal to understand, given infinite resources, the sort of ClickJacking protection we might have tried building.

Every few weeks for the last six or so years, I see someone complain on Twitter or in forums that the entire Internet seems to think they’re running an old version of IE. For instance, an IE11 user on Windows 8.1 might see the following warning on Facebook:

Facebook’s unsupported-browser warning

These warnings typically occur when the browser is using Compatibility View mode for a site and the site demands a browser that supports modern standards. Many customers used to find themselves accidentally in this state because they were overzealously clicking the “Compatibility View” button (back when IE had one) or clicking the “Display all sites in Compatibility View” checkbox (back when IE had it).

Since IE11 has cleaned that mess up (by hiding Compatibility View), you might wonder how a user could end up in such a broken state.

The answer is both complicated and interesting, deeply intertwined with nearly 15 years of subtle Internet Explorer behaviors.

When I ask the affected IE11 user to visit my User-Agent string test page, they see IE7’s Compatibility View user-agent string:

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.3; Win64; x64; Trident/7.0; .NET4.0E; .NET4.0C; Media Center PC 6.0; .NET CLR 3.5.30729; .NET CLR 2.0.50727; .NET CLR 3.0.30729)

But why?

Since IE no longer shows the Zone in the status bar, you must right-click the page and choose Properties to get your next clue:

Page Properties dialog showing the Local Intranet Zone

Wait, what?!? Why is some random site on the Internet in the privileged Local Intranet security zone?

Next the user does the same test on Facebook.com and finds that it too is in the Intranet Zone. In fact, the whole web is getting zoned as Intranet!

This represents a significant security hole, and the user has only discovered it because, by default, Tools > Compatibility View Settings has Display Intranet sites in Compatibility View set, and the unwanted CompatView causes sites like Facebook to complain.

So what’s going on here!?!

Click Tools > Internet Options > Connections > LAN Settings, and observe that the settings are the defaults:

LAN Settings dialog with default options

Wait… what exactly does that Automatically detect settings option do?

Why, it allows a computer on your network to decide what proxy server your client should use, through a process called WPAD. The server in question gets to supply a proxy configuration script that implements a function named FindProxyForURL(). That function returns either a proxy (e.g. “PROXY myproxy:8080”) or “DIRECT” to indicate that the request should be sent directly to the origin server and bypass the proxy.
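For the curious, a proxy configuration (PAC) script is just a small script exposing that one function. Here is a sketch with made-up hostnames; the type annotations are added for readability, and a real PAC file is plain JavaScript.

// Anything returned as "DIRECT" bypasses the proxy and, with IE's default
// zone settings, is treated as part of the Local Intranet Zone.
function FindProxyForURL(url: string, host: string): string {
  // Send requests for the (made-up) corporate domain straight to the server...
  if (host === "intranet" || /\.corp\.example\.com$/i.test(host)) {
    return "DIRECT";
  }
  // ...and everything else through the proxy.
  return "PROXY proxy.corp.example.com:8080";
}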

And now we’re getting somewhere. Take a look at the checkboxes inside Tools > Internet Options > Security > Local Intranet > Custom Level, specifically the second checkbox:

Local Intranet Zone custom settings

Yup, that’s right—if a proxy script returns DIRECT for a given site, IE defaults to treating that site as a part of the Local Intranet Zone, giving it additional privileges and also defaulting it to CompatView. Oops.

You might think: “well, surely a network proxy administrator would never make that mistake!”

Back in 2011, the IE team started getting email from all over the company complaining that “IE is broken. It doesn’t support HTML5!” Guess why not? Oops.

Recommendations

Unless you’re running IE on a Corporate Network that requires support for things like Negotiate Authentication, you should untick the Automatically detect intranet network checkbox and all of the checkboxes beneath it. This improves security and enhances IE’s sandbox.

Unless you’re running a laptop that moves to corporate networks, you should also disable the Automatically detect settings checkbox to prevent IE from asking your network what proxy to use.

-Eric Lawrence

I’ve found myself a bit stalled in writing my memoir, so I’m going to post a few stories here in the hopes of breaking free of writer’s block…

The use of first names and email aliases at Microsoft could easily lead to confusion for new employees. A few weeks into my first summer (1999) at Microsoft, the interns received an email from a Steven Sinofsky, announcing that there would be a party later that month “at Jillians.” The email was a bit short on details beyond the date and time, and I wanted to make a good impression. I’d hate to show up at some big shot’s fancy house in jeans and a T-shirt only to discover that corporate parties are always formal affairs. So I emailed Steven and asked “will the party at Jillian’s house require formal wear?”

A few minutes later, while musing that it was nicely progressive for Microsoft to have some executive named “Jillian” who had a male secretary/assistant named “Steven,” it occurred to me that I didn’t know who Jillian was or what products she owned. Fortunately, at Microsoft, the Outlook Address Book (aka the GAL, Global Address List) contains both full names and titles, so I quickly looked up Steven to see who he worked for.

My heart leapt into my throat when I saw Steven’s title. It wasn’t “Administrative Aide,” “Executive Assistant,” or anything else I might have guessed. “Vice President” it said simply. With mounting alarm, I turned around to ask my office-mate: “Um, who’s Jillian?” He looked confused. “You know, the intern party’s at her place?” I clarified.

I watched as comprehension and then amusement dawned. “Oh, Jillian’s is a sports bar and billiards parlor downtown” he replied. Seeing the horror on my face, he continued “Why do you ask?”

I swiveled back to my computer and went to Outlook’s Sent Items folder to confirm that I had indeed made a huge fool of myself. I began frantically hunting through Outlook’s menus… surely there was some way to fix this. The command “Recall this message” leapt off the screen and for the first time in minutes my pulse began to slow. I invoked the command and gave a silent thanks to whomever had invented such a useful feature.

It was weeks before I learned that the way “Recall this message” works tends to increase the likelihood that someone will read your message. Instead of simply deleting the message, it instead sends the recipient a message indicating that you would like to recall your prior message, and requests their permission to delete the original. Most recipients, I expect, then immediately go read the original to see why you deemed a recall necessary. Fortunately for my fragile ego, either Steven didn’t do that, or he took pity on me and simply didn’t reply.

After this experience, I never replied to an email from someone I didn’t know without first consulting the GAL.

Postscript

It was around 8am on a Saturday morning in the winter of 2010 and I’d just woken up. I got an email directly from Steven asking a deeply technical question (restrictions on Unicode endianness when parsing a Mark-of-the-Web in HTML) about some code he was writing. I was seriously impressed, both in that he was clearly writing code, but also that he’d somehow known exactly the most suitable person to send his question to, far down the organizational ladder. I confirmed the limitation and mentioned how inspirational I found it to be working in an organization where my Vice President wasn’t afraid to get his hands dirty.

That afternoon, I was reminiscing about that incident and my first-ever mail to Steven… then I got a sinking feeling. Popping open the GAL, I confirmed my recollection that he’d been promoted to President the year before. He never corrected me.

History

The new UI of Internet Explorer 7 included a dedicated search box adjacent to the address bar, like the then-new Firefox. As IE7 was built between 2004 and 2006, Microsoft didn’t have a very credible entry into the search engine market—Bing wouldn’t appear until 2009. The IE team made a wise decision in support of the open web—we embraced the nascent OpenSearch specification, developed by Amazon, for describing search providers, allowing the browser to easily discover search providers offered by the site and enabling users to easily add those providers to IE’s search box.

This was a huge win for openness– it ensured that IE users had their choice of the best search engines the web had to offer. There was no lock-in.

Aside: The Narrative

Part of the Internet Explorer team’s internal narrative1 for years was that only two browsers were properly aligned with users’ interests—the only browsers where the customer was also the user were Safari and Internet Explorer. Safari and IE were the browsers bought by the customers who purchased their vendor’s hardware and software, respectively. In contrast, the story went, Firefox had one customer, Google, who paid hundreds of millions of dollars for the right to be the default search engine. Later, Chrome had many thousands of customers, the AdSense advertisers who were buying access to the real product (millions of users’ eyeballs). As a consequence, the narrative went, the IE team were champions of the user, and thus we’d make every decision with only our customers’ experience in mind.

What Happened Next

Fortunately, OpenSearch was quickly successful, and both Chrome and Firefox adopted it and the window.external.AddSearchProvider API that allowed a site (upon a user-initiated action) to offer to add a new Search Provider to the browser. This enabled customers to easily access search engines both large (Google, Yahoo, Bing, etc) and niche (Amazon, MSDN, etc) within their browser of choice. Some browsers even used OpenSearch to allow users to access search providers without installing them.
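For reference, the moving parts were small. The sketch below (TypeScript, with made-up URLs) shows roughly what a site shipped: an OpenSearch description document, advertised via a <link rel="search" type="application/opensearchdescription+xml"> tag, plus a user-triggered call to the AddSearchProvider API.

// 1) The OpenSearch description document, served as XML from the site:
const openSearchXml = `
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Url type="text/html" template="https://search.example.com/?q={searchTerms}"/>
</OpenSearchDescription>`;

// 2) In response to a user gesture (e.g. a button click), offer to add the
//    provider. window.external isn't typed in lib.dom, hence the cast.
function offerSearchProvider(): void {
  (window.external as any).AddSearchProvider(
    "https://search.example.com/opensearch.xml"
  );
}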

Openness won…

Today

… until it didn’t. The Internet Explorer team has indicated that, in their next browser (currently codenamed Project Spartan), they don’t plan to support the de facto standard AddSearchProvider API they invented. They’ve offered a variety of defenses of the decision (e.g. “Safari doesn’t support it so we don’t have to!”) that they’ve previously ridiculed in other contexts (e.g. Pointer Events).

Currently, in fact, Spartan is hardcoded to use just one search engine—Bing. I have no doubt that the Spartan team will add additional search engines to their browser before they ship, but only an open API provides the freedom for sites and users to interact without any confounding politics and economic decisions. If I want to switch to a privacy-focused engine like DuckDuckGo, that should be trivial. If I want the ability to quickly run MSDN searches, this shouldn’t require petitioning the IE development team.

Security And Privacy

Making matters worse, Spartan users’ searches are sent to Bing over the network in plaintext, despite the fact that Bing supports HTTPS and the latest versions of both Chrome and Firefox use that HTTPS provider.

Some have argued that AddSearchProvider is “too powerful” and browsers shouldn’t offer APIs that enable changes to their configuration, even with user consent. This argument is compelling until you notice what actually happens—when you take away the sandboxed, restricted API, sites don’t just throw up their hands and say “Ah well, guess we’ll go home.” Instead, they send the user executable downloads that can take any action they like on the system, changing the search provider and, while they’re at it, reconfiguring the user’s other browsers, changing the search page, throwing in a toolbar or two, or whatever. Once the user’s suckered into running your code, why not maximize the value? And the Windows ecosystem continues its swirl toward the drain…

Other users have argued that a “gallery” of search providers, like IEAddons.com, is the right way to go. There are many problems with this approach. First, it requires that each site go to Microsoft, hat in hand, and register a provider. It requires that users go out of their way to visit the Gallery. Worst of all, it doesn’t provide any user workaround when the Gallery gets things wrong: for example, both Bing and Google offer HTTPS-based searches, and have for years. But if you install their official providers from the IE Gallery, you get insecure search and leak your keystrokes as you type in the address bar. Microsoft Security Response Center (MSRC) has indicated that they do not consider this a security vulnerability.

In contrast, when AddSearchProvider is supported, the search engine can itself offer the proper, secure, search URLs. Or a user can build their own provider.

Please join me in begging the Internet Explorer team to reconsider: Support freedom. #SupportOpenSearch.

Vote here to fix Spartan: Bug Tracker link

Update: Hours after this post, the April security update for IE broke the AddSearchProvider API in existing IE versions. :-(

-Eric Lawrence

1 The validity of this narrative is itself worthy of its own post, so please don’t bother flaming the comments below.