Seek and Destroy Non-Secure References Using the moarTLS Analyzer

tl;dr: I made a Chrome Extension that finds security vulnerabilities.
It’s now available for Firefox too!

To secure web connections, enabling TLS on servers is only half the battle; the other half is ensuring that TLS is used everywhere.

Unfortunately, many HTTPS sites today include non-secure references that give a network-based attacker the opportunity to break into the user’s experience as they interact with otherwise secure sites. For instance, consider the homepage of Fidelity Investments:

Fidelity Homepage

You can see that the site has the green lock and it’s using an EV certificate, so the organization’s name and location are displayed. Looks great! Even if you loaded this page over a coffee shop’s WiFi network, you’d feel pretty good about interacting with it, right?

Unfortunately, there’s a problem. Hit F12 to open the Chrome Developer Tools, and use the tool to select the “Open An Account” link in the site’s toolbar. That’s the link you’d click to start giving the site all of your personal information.

href attribute containing HTTP

Oops. See that http:// hiding out there? Even though Fidelity delivered the entire homepage securely, if a user clicks that link, a bad guy on the network has the opportunity to supply his own page in response! He can either return an HTML response that asks the victim for their information, or redirect to a phony server (e.g. https://newaccountsetup.com) so that a lock icon remains in the address bar.

Adding insult to injury, what happens if a bad guy doesn’t take advantage of this hole?

    GET http://www.fidelity.com/open-account/overview
    301 Redirect to https://www.fidelity.com/open-account/overview

That’s right, in the best case, the server just sends you over to the HTTPS version of the page anyway. It’s as if the teller at your bank carried cash deposits out the front door, walking them around the building before reentering the bank and carrying them to the vault!

Okay, so, what’s a security-conscious person to do?

First, recognize the problem. If you stumble across an “HTTP” reference, it’s a security bug. Either fix it, or complain to someone who can.

Next, actively seek-and-destroy non-secure references.

A New Category of Mixed Content?

Web developers are familiar with two categories of Mixed Content: Active Mixed-Content (e.g. script) which is blocked by default, and Passive Mixed-Content (images, etc), which browsers tend to allow by default, usually with the penalty of removing the lock from the address bar.

However, secure pages with non-secure links don’t trigger ANY warning in the browser.

For now, let’s call the problem described in this post Latent Mixed Content.

Finding Latent Mixed Content

Finding HTTP links isn’t hard, but it can be tedious. To that end, between late-night feedings of my newborn, I’ve been learning Chrome’s extension model, a wonderful breath of fresh air after years of hacking together COM extensions in IE. The result of that effort is now available for your bug-hunting needs.

Download the moarTLS Chrome Extension.

#icanhazthecodez? Sure.

The extension adds an unobtrusive button to Chrome’s toolbar. The extension is designed to have little-to-no impact on the performance or operation of Chrome unless you actively invoke it.

When the button is clicked, the extension analyzes the current page to find which hyperlinks (<a> elements) target a non-secure protocol. If the page is free of non-secure links, the report is green:

PayPal showing all green
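The underlying check is straightforward: enumerate the page’s anchor elements and flag any href that resolves to a non-secure scheme. The extension itself is JavaScript, but the idea can be sketched in a few lines of standard-library Python (the class and variable names here are illustrative, not the extension’s actual code):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlsplit

class NonSecureLinkFinder(HTMLParser):
    """Collects <a href> targets that resolve to a non-secure scheme."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.non_secure = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        # Resolve relative hrefs against the page URL, then check the scheme.
        absolute = urljoin(self.base_url, href)
        if urlsplit(absolute).scheme == "http":
            self.non_secure.append(absolute)

finder = NonSecureLinkFinder("https://www.example.com/")
finder.feed('<a href="http://www.example.com/open-account">Open</a>'
            '<a href="/help">Help</a>')
print(finder.non_secure)  # ['http://www.example.com/open-account']
```

Note that relative links like /help inherit the page’s own (secure) scheme, so only explicit http:// references are flagged.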

If the current page’s host sends an HTTP Strict Transport Security directive, a green lock is shown next to the hostname at the top of the report: green lock. Click the hostname to launch the SSLLabs Server Test for the host to explore which secure protocols are supported and find any errors in the host’s certificate or TLS configuration.
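That green-lock check boils down to: did the response carry a Strict-Transport-Security header, and what policy does it declare? A simplified parsing sketch (real parsing, per RFC 6797, is stricter about quoting, duplicate directives, and malformed max-age values):

```python
def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header into its directives.

    Returns a dict such as {"max-age": 31536000, "includesubdomains": True},
    or None if the header is missing or empty.
    """
    if not header_value:
        return None
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        # Directives are either "name=value" (max-age) or bare flags.
        name, _, value = part.partition("=")
        name = name.lower()
        directives[name] = int(value) if name == "max-age" else True
    return directives

policy = parse_hsts("max-age=31536000; includeSubDomains")
print(policy)  # {'max-age': 31536000, 'includesubdomains': True}
```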

If the page contains one or more non-secure links, the report gets a yellow background and the non-secure links are listed:

Yellow warning report, 4 non-secure links

The non-secure links in the content of the page are marked in red for ease-of-identification:

Four Red links in the page

Alt+Click (or Ctrl+Click) on any entry in the report to cause the extension to probe the target hostname and see whether an HTTPS connection is possible. If an HTTPS connection attempt succeeds, a grey arrow is shown. If the connection attempt fails (indicating that the server is only accessible via HTTP), a red X is shown:

Three grey up-arrows, one red X

If the target is accessible over HTTPS and the response includes an HTTP Strict Transport Security header, the grey arrow is replaced with a green arrow:

Green Arrow

Note: Accepting HTTPS connections alone doesn’t necessarily indicate that the host completely supports HTTPS—the secure connection could result in error pages or redirections back to HTTP. But a grey arrow indicates that the server at least has a valid certificate and is listening for TLS connections on port 443. Before updating each link to HTTPS, verify that the expected page is returned.
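The probe behind those arrows amounts to attempting a certificate-validating TLS handshake on port 443. Here’s an illustrative Python sketch (again, not the extension’s actual code); the connection function is injectable so the logic can be exercised without touching the network:

```python
import socket
import ssl

def probe_https(host, port=443, timeout=5, connector=None):
    """Return True if a TLS handshake with `host` succeeds.

    By default this opens a real, certificate-validating connection;
    pass a custom `connector` to stub the network out in tests.
    """
    if connector is None:
        def connector(h, p):
            ctx = ssl.create_default_context()  # validates cert + hostname
            with socket.create_connection((h, p), timeout=timeout) as raw:
                with ctx.wrap_socket(raw, server_hostname=h):
                    pass  # handshake completed
    try:
        connector(host, port)
        return True   # grey arrow (or green, if HSTS is also present)
    except OSError:   # ssl.SSLError subclasses OSError
        return False  # red X: no usable HTTPS endpoint
```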

Returning to our original example, Fidelity’s non-secure links are readily flagged:

Non-secure links

If you read your email in a web client like GMail or Hotmail, you can also check whether your HTML emails are providing secure links:

Insecure credit card


HTTP-Delivered Pages

The examples above presuppose that the current page was delivered over HTTPS. If the page itself was delivered non-securely (over HTTP), invoking the moarTLS extension colors the background of the page itself red. In the report flyout, the hostname shown at the top is prefixed with http/. The icon adjacent to the domain name will either be an up-arrow:

…indicating that the host accepts HTTPS connections, or a red-X, indicating that it does not:

The extension exposes the option (simply right-click the icon) to flip insecurely-delivered images:

moarTLS options

When this option is enabled, images delivered insecurely are flipped vertically, graphically demonstrating one of the least malicious actions a Man-in-the-Middle could undertake when exploiting a site’s failure to use HTTPS.

Upside-down images

The Warn on non-secure downloads option instructs the extension to warn you when a file download occurs if either the page linking to a download, or the download itself, used a non-secure protocol:

Non-secure download

Non-secure file downloads are extremely dangerous; we’ve already seen attacks in-the-wild where a MITM intercepts such requests and responds with malware-wrapped replacements. Authenticode-signing helps mitigate the threat, but it’s not available everywhere, and it should be bolstered with HTTPS.

Limitations

This extension has a number of limitations; some will be fixed in future updates.

False Negatives

  • moarTLS looks only at the links in the markup. JavaScript could intercept a link click and cause a non-secure navigation when the user clicks a link with an otherwise secure HREF.
  • moarTLS does not currently check the source of CSS background images.
  • moarTLS does not currently mangle insecurely-delivered fonts, audio, or video.
  • moarTLS only evaluates links currently in the page. If links are added later (e.g. via AJAX calls), they’re not marked unless you click the button again.

False Positives

  • moarTLS looks only at the links in the markup. JavaScript could intercept a link click and cause a secure navigation when the user clicks a link with an otherwise non-secure HREF. Arguably this isn’t a false-positive because the user may have JavaScript disabled.
  • moarTLS looks only at the link, and does not exempt links which are automatically upgraded by the browser due to an HSTS rule. Arguably this isn’t a false-positive because not all browsers support HSTS, and the user may copy a URL to a non-browser client (e.g. curl, wget, etc).
  • moarTLS isn’t aware of upgrade-insecure-requests, although that only helps for same-origin navigations. Arguably this isn’t a false-positive because not all browsers support this CSP directive, and the user may copy a URL to a non-browser client (e.g. curl, wget, etc).
  • moarTLS isn’t aware of block-all-mixed-content. Arguably this isn’t a false-positive because not all browsers support this CSP directive, and the user may copy a URL to a non-browser client (e.g. curl, wget, etc).

Q&A

Q1. Why is this an extension? Shouldn’t it be in the Developer Tools’ Security pane, which currently flags active and passive mixed content:

Security Report

A1. Great idea. :)

Q2. How do I examine “popups” which don’t show the Chrome toolbar?

No toolbar

A2. Use “Show as Tab” on the system menu:

System Menu command Show As Tab


Q3. How much of the Web is HTTPS today?

A3. See https://security.googleblog.com/2016/03/securing-web-together_15.html

Using HTTPS Properly

Disclaimer: I’m a big fan of Pandora. I’ve been a listener for a decade or more, and I started paying for an annual subscription even before there was any real incentive to do so, solely because I loved the service and wanted it to succeed. This post isn’t really about Pandora per se, but about common anti-patterns in the industry.

On Friday, I opened Pandora.com in my browser. Unlike my normal pattern of throwing its player tab in the background, I instead decided to click around a bit (I’m testing a new Chrome extension, more on that later). I immediately noticed that the player page was delivered over unsecure HTTP and I frowned.

Then I clicked Settings:

Settings link on Pandora

I was disappointed to see that the page remained over HTTP, and personal information was shown in the page:

Personal Info

Not good. While I have mixed feelings about network snoopers knowing what I’m listening to, I definitely get uncomfortable with the idea that they can see my email address, birth year, zip code, and more. With mounting concern, I clicked down to the Billing link:

Billing Info

Okay, so that’s not good. My full name, zip code, eight digits of my credit card number, and its expiration are all on display in a page delivered over an unprotected channel. Any network snooper can steal this information, or prompt me for more information as if it were the legitimate site.

Perhaps even more galling is the little logo at the bottom right, assuring me that my information is secure:

Norton Secured logo

Okay, so what’s going on here? I’ve encountered sites that aren’t protected with HTTPS at all before, but it’s relatively rare to see a legitimate site claim that it is using HTTPS when it’s not. And Pandora’s a public company, over a decade old, valued at over $2,500,000,000.

Maybe Fiddler will reveal something interesting. The Billing page itself comes over HTTP:

Insecure download of /billing page

… but it doesn’t actually contain my private data. My private data is instead pulled as a JSONP response from an HTTPS URL:

HTTPS request shown in Fiddler

So, my private data doesn’t travel over the network in cleartext. Safe, right?

No, not so fast.

The first problem here is that this is JSONP data, which means that the unsecure calling page has access to all of the data—JSONP is being used to circumvent same-origin-policy. So, while a network-based attacker can’t read my data directly off the wire, he can simply rewrite the HTTP page itself to leak the data.

Furthermore, as a user, I don’t have any way of knowing which parts of this page are trustworthy and which are not. As a consequence, attacker-injected script on this page could ask for a password, and a reasonable person would probably type it. I groaned at the thought of spending 20 minutes mocking this up, but it turned out that I didn’t even have to… the original HTTP page itself can ask for the password!

Password prompt on HTTP page

There’s no way for the user to know whether this is a legitimate password prompt which will securely handle the user’s password, or a fake injected by an attacker on the network.

At this point, I’d seen enough, and decided to contact the security folks at Pandora. Unfortunately, the RFC2142-recommended email address security@pandora.com just bounces:

security@pandora.com does not exist

I also had a look at the HackerOne directory, but unfortunately there was no entry for Pandora.

March 9 Update: Pandora has enabled a security alias and registered it with the HackerOne Directory.

Okay, time for regular support channels. On Friday afternoon, I sent the following message to customer support:

SECURITY BUG

Your site needs to be using HTTPS for ALL pages. The way it’s designed today allows an attacker to steal all of the private information (credit card digits, expiration, email address, music choices, etc).

-Eric Lawrence

Around 22 hours later, I received the following response:

Hello Eric,

Thanks for writing!

Like a lot of sites these days, by default we only use SSL encryption (the actual under-the-hood technology behind an “https” connection) for the portions of our pages that accept or transmit financial data. This saves a lot of overhead, both on our end and within your own browser, by transmitting most of the page – background color, Pandora logo image and so on – via a non-secure (normal web “http”) connection.

The Verisign logo, and the “in good standing” account status we have with them (which you can see when you click the Verisign badge on our payment page), indicates that we’re really encrypting the parts of that page (like the credit-card-entry fields) that need to be encrypted.

You can see that here: https://trustsealinfo.verisign.com/splash?form_file=fdf/splash.fdf&dn=www.pandora.com&lang=en

As an added precaution, you can manually load https://pandora.com whenever you make a payment on the site or update your credit card information.

You can also upgrade using PayPal rather than entering your card information with us.

Let me know if you have any other questions about this. Thanks again for your support!

Groan. Perhaps the worst part about this response is that it’s clear that I’m not the first to complain about this.

Okay, let’s take this point-by-point:

Like a lot of sites these days, by default we only use SSL encryption (the actual under-the-hood technology behind an “https” connection) for the portions of our pages that accept or transmit financial data.

It’s true that some sites follow this insecure practice, but they’re almost all vulnerable to multiple forms of attack. Beyond the threat of an active man-in-the-middle, conducting most transactions in plaintext means that my ISP (or whoever is along the network path) can profile me (learning what artists I listen to, for instance) and sell that data to advertisers.

We needn’t rathole on the fact that the Pandora servers thankfully don’t even support SSL anymore, allowing only TLS1 and later.

ProTip: If you think you’re clever enough to securely encrypt only part of your web application, you’re almost certainly wrong.

This saves a lot of overhead, both on our end and within your own browser, by transmitting most of the page – background color, Pandora logo image and so on – via a non-secure (normal web “http”) connection.

First off, TLS is plenty fast. Second, if a site sending an endless stream of 6mb MP3 files needs to optimize performance, it’s unlikely that the “background color” is really the problem. And if it is, using a 2.1K file which is visually identical to your 246K file is probably a good start, as is caching it for more than one day. But I digress.

The Verisign logo, and the “in good standing” account status we have with them (which you can see when you click the Verisign badge on our payment page), indicates that we’re really encrypting the parts of that page (like the credit-card-entry fields) that need to be encrypted.

You can see that here: https://trustsealinfo.verisign.com/splash?form_file=fdf/splash.fdf&dn=www.pandora.com&lang=en

The “Verisign logo” referenced here is actually a “Norton Secured, powered by Symantec” badge. I suspect that the agent had the wrong text because they’ve been copy/pasting the same incorrect information for years.

What this badge actually means is that they have a certificate, not that they’re using it properly. The Certificate Authority does not verify that you’re doing HTTPS properly. “In good standing” means only that your check cleared when you bought the certificate.

As an added precaution, you can manually load https://pandora.com whenever you make a payment on the site or update your credit card information.

Finally, we have a piece of new information—the site is available over HTTPS. Sorta.

Mixed Content notification

Security shouldn’t be optional; Pandora shouldn’t require users to install browser add-ons (e.g. HTTPS Everywhere) or remember to type HTTPS every time to keep their information safe.

I emailed back and indicated that the response was inaccurate and requested my ticket be escalated to Pandora’s security team.

Sorry to hear you feel that way, Eric.

Thanks for writing in and sharing your thoughts with us. We value feedback from our listeners whether it’s positive or negative. Unfortunately, I am unable to put you in touch with our security team at this time.

So, that’s a no-go.

ProTip: Unless you want your frontline user-support team triaging security vulnerability reports, get a HackerOne account and hook up a security@ alias.

Since I promised that this post isn’t just about Pandora, here are two more screenshots:

Hulu Certificate Error

Hulu Insecure Login

It’s 2016. Bad actors are all over the network. Certificates are free. TLS is fast. Customers deserve better.

-Eric

March 9 Update: Pandora has enabled a security alias and registered it with the HackerOne Directory. They’ve also said “our engineering department is and has been actively working on transitioning http://www.pandora.com to HTTPS only.”

On Daylight Savings Time

In Fiddler, the Caching tab will attempt to calculate the cache freshness lifetime for responses that lack an explicit Expires or Cache-Control: max-age directive. The standard suggests clients use (0.1 * (DateTime.Now – Last-Modified)) as a heuristic freshness lifetime.

An alert Fiddler user noticed that the values he was seeing were slightly off what he expected: sometimes the values were 6 minutes shorter than he thought they should be.

Consider the following scenarios:

Last-Modified: February 28, 2016 01:00:00
Date: February 29, 2016 01:00:00

These are 24 hours apart (1440 minutes); 10% of that is 144 minutes.

Last-Modified: March 13, 2016 01:00:00
Date: March 14, 2016 01:00:00

Due to the “spring forward” adjustment of Daylight Savings Time, these values are just 23 hours apart (1380 minutes); 10% of that is 138 minutes.

Last-Modified: November 6, 2016 01:00:00
Date: November 7, 2016 01:00:00

Due to the “fall back” adjustment of Daylight Savings Time, these values are 25 hours apart (1500 minutes); 10% of that is 150 minutes.

So when a timespan encompasses an even number of those DST transitions, the effect cancels out. When a timespan encompasses an odd number of these DST transitions, the span is either an hour longer or an hour shorter than it would be if Daylight Savings Time did not exist.

-Eric

Out-of-Memory is (Usually) a Lie

  • The most common exception logged by Fiddler telemetry is OutOfMemoryException.
  • Yesterday, a Facebook friend lamented: “How does firefox have out of memory errors so often while only taking up 1.2 of my 8 gigs of ram?”
  • This morning, a Python script running on my machine as a part of the Chromium build process failed with a MemoryError, despite 22gb of idle RAM.

Most platforms return an “Out of Memory error” if an attempt to allocate a block of memory fails, but the root cause of that problem very rarely has anything to do with truly being “out of memory.” That’s because, on almost every modern operating system, the memory manager will happily use your available hard disk space as a place to store pages of memory that don’t fit in RAM; your computer can usually allocate memory until the disk fills up (or a swap limit is hit; in Windows, see System Properties > Performance Options > Advanced > Virtual memory).

So, what’s happening?

In most cases, the system isn’t out of RAM—instead, the memory manager simply cannot find a contiguous block of address space large enough to satisfy the program’s allocation request.

In each of the failure cases above, the process was 32bit. It doesn’t matter how much RAM you have, running in a 32bit process nearly always means that there are fewer than 3 billion addresses1 at which the allocation can begin. If you request an allocation of n bytes, the system must have n unused addresses in a row available to satisfy that request.

Making matters much worse, every active allocation in the program’s address space can cause “fragmentation” that can prevent future allocations by splitting available memory into chunks that are individually too small to satisfy a new allocation with one contiguous block.

Out-of-address-space
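A toy model makes the distinction between total free space and contiguous free space concrete. The numbers below are invented for illustration: plenty of address space is free in total, yet no single run is large enough for a big allocation:

```python
def can_allocate(free_runs, request):
    """True if any single contiguous free run can hold `request` bytes."""
    return any(run >= request for run in free_runs)

MB = 1024 * 1024

# Hypothetical free runs in a fragmented 32-bit address space.
free_runs = [512 * MB, 512 * MB, 256 * MB, 256 * MB]
total_free = sum(free_runs)

print(total_free // MB)                   # 1536 (MB free in total)
print(can_allocate(free_runs, 700 * MB))  # False: no single 700 MB run
print(can_allocate(free_runs, 400 * MB))  # True: fits in a 512 MB run
```

Even though 1.5 GB is nominally free, the 700 MB request fails; that is the "out of memory" most 32-bit processes actually hit.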

Running out of address space most often occurs when dealing with large data objects like arrays; in Fiddler, a huge server response like a movie or .iso download can be problematic. In my Python script failure this morning, a 1.3gb file (chrome_child.dll.pdb) needed to be loaded so its hash could be computed. In some cases, restarting a process may resolve the problem by either freeing up address space, or by temporarily reducing fragmentation enough that a large allocation can succeed.

Running 64-bit versions of programs will usually eliminate problems with address space exhaustion, although you can still hit “out-of-memory” errors before your hard disk is full. For instance, to limit their capabilities and prevent “runaway” allocations, Chrome’s untrusted rendering processes run within a Windows job object with a 4gb memory allocation limit:

Job limit 4gb shown in SysInternals Process Explorer

Elsewhere, the .NET runtime restricts individual array dimensions to 2^31 entries, even in 64bit processes2.

-Eric Lawrence

1 If a 32bit application has the LARGEADDRESSAWARE flag set, it has access to a full 4gb of address space when run on a 64bit version of Windows.

2 So far, four readers have written to explain that the gcAllowVeryLargeObjects flag removes this .NET limitation. It does not. This flag allows objects which occupy more than 2gb of memory, but it does not permit a single-dimensional array to contain more than 2^31 entries.

Things I’ve Learned in my first weeks on Chrome

This is a stub post which will be updated periodically.

It would be impossible to summarize how much I’ve learned in the last six weeks working at Google, but it’s easy to throw together some references to the most interesting and accessible things I’ve learned. So that’s this post.

Developing Chrome

Searching the code is trivial. You don’t need to know C++ to read C++. And if you can write C++, the process of adding new code to Chrome isn’t too crazy.

Creating bugs is easy: https://crbug.com

Developing Chrome extensions is easy and approximately 5% as hard as building IE extensions.

Using Chrome

PM’ing at Microsoft was all about deleting email. Surviving at Google is largely an exercise in tab management, since nearly everything is a web page. QuickTabs allows you to find that tab you lost with a searchable most-recently-used list.

You can CTRL+Click *multiple* Chrome tabs and drag your selections out into a new window (unselected tabs temporarily dim). Use SHIFT+Click if you’d prefer to select a range of tabs.

Hit SHIFT+DEL when focused on unwanted entries in the omnibox (addressbar) dropdown to get rid of them.

Want to peek under the hood? Load chrome://chrome-urls to see all of the magic URLs that Chrome supports for examining and controlling its state, caches, etc. For instance, the ability to view network events and export them to a JSON log file (see chrome://net-internals), later importing those events to review them using the same UI is really cool. Other cool pages are chrome://tracing, chrome://crashes, and chrome://plugins/.

Chrome uses a powerful system of experimental field trials; your Chrome instance might behave differently than everyone else’s.

Web Developers and Footguns

If you offer web developers footguns, you’d better staff up your local trauma department.

In a prior life, I wrote a lot about Same-Origin-Policy, including the basic DENY-READ principle that means that script running in the context of origin A.com cannot read content from B.com. When we built the (ill-fated) XDomainRequest object in IE8, we resisted calls to enhance its power and offer dangerous capabilities that web developers might misconfigure. As evidence that even experienced web developers could be trusted to misconfigure almost anything, we pointed to a high-profile misconfiguration of Flash cross-domain policy by a major website (Flickr).

For a number of reasons (prominently including unwillingness to fix major bugs in our implementation), XDomainRequest received little adoption, and in IE10 IE joined the other browsers in supporting CORS (Cross-Origin-Resource-Sharing) in the existing XMLHttpRequest object.

The CORS specification lets sites grant extremely powerful cross-origin access to data via the Access-Control-Allow-Origin and Access-Control-Allow-Credentials headers. By setting these headers, a site effectively opts out of the bedrock isolation principle of the web and allows script from any other site to read its data.

Evan Johnson recently did a scan of top sites and found over 600 sites which have used the CORS footgun to disable security, allowing, in some cases, theft of account information, API keys, and the like. One of the most interesting findings is that some sites attempt to limit their sharing by checking the inbound Origin request header for their own domain, without verifying that their domain was at the end of the string. So, victim.com is vulnerable if the attacker uses an attacking page on hostname victim.com.Malicious.com or even AttackThisVictim.com. Oops.
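The broken pattern described above, and a stricter alternative, can be sketched in Python (the helper names are hypothetical; a real server would run this check before echoing the Origin into Access-Control-Allow-Origin):

```python
from urllib.parse import urlsplit

def naive_origin_ok(origin):
    """The flawed check: 'is our domain anywhere in the Origin header?'"""
    return "victim.com" in origin

def strict_origin_ok(origin, allowed_hosts=("victim.com", "www.victim.com")):
    """Safer: parse the Origin and require an exact host (and scheme) match."""
    parts = urlsplit(origin)
    return parts.scheme == "https" and parts.hostname in allowed_hosts

# The naive check waves through attacker-controlled origins:
print(naive_origin_ok("https://victim.com.malicious.com"))  # True (bad!)
print(naive_origin_ok("https://attackthisvictim.com"))      # True (bad!)

# Exact matching rejects them while still allowing the real site:
print(strict_origin_ok("https://victim.com.malicious.com"))  # False
print(strict_origin_ok("https://victim.com"))                # True
```

Suffix tricks aside, the safest policy is a short exact-match allowlist; pattern-matching Origin values is where most of these bugs come from.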

Vulnerability Checklist

For your site to be vulnerable, a few things need to be true:

1. You send Access-Control-Allow headers

If you’re not sending these headers to opt-out of Same-Origin-Policy, you’re not vulnerable.

2. You allow arbitrary or untrusted Origins

If your Access-Control-Allow-Origin header only specifies a site that you trust and which is under your control (and which is free of XSS bugs), then the header won’t hurt you.

3. Your site serves non-public information

If your site only ever serves up public information that doesn’t vary based on the user’s cookies or authentication, then same-origin-policy isn’t providing you any meaningful protection. An attacker could scrape your site from a bot directly without abusing a user’s tokens.

Warning: If your site is on an Intranet, keep in mind that it is offering non-public information—you’re relying upon ambient authorization because your site’s visitors are inside your firewall. You may not want Internet sites to be able to scrape your Intranet.

Warning: If your site has any sort of login process, it’s almost guaranteed that it serves non-public information. For instance, consider a site where I can log in using my email address and browse some boring public information. If any of the pages on the site show me my username or email address, you’re now serving non-public information. An attacker can scrape the username or email address from any visitor to his site that also happens to be logged into your site, violating the user’s expectation of anonymity.

Visualizing in Fiddler

In Fiddler, you can easily see Access-Control policy headers in the Web Sessions list. Right-click the column headers, choose Customize Columns, choose Response Headers and type the name of the Header you’d like to display in the Web Sessions list.

Add a custom column

For extra info, you can click Rules > Customize Rules and add the following inside the Handlers class:


// Summarizes CORS response headers in a custom "Access-Control" column.
public static BindUIColumn("Access-Control", 200, 1)
function FillAccessControlCol(oS: Session): String {
  if (!oS.bHasResponse) return String.Empty;
  var sResult = String.Empty;

  var s = oS.ResponseHeaders.AllValues("Access-Control-Allow-Origin");
  if (s.Length > 0) sResult += ("Origin: " + s);

  if (oS.ResponseHeaders.ExistsAndContains(
    "Access-Control-Allow-Credentials", "true"))
  {
    sResult += " +Creds";
  }

  // Reuse s; JScript.NET rejects redeclaring a variable in the same scope.
  s = oS.ResponseHeaders.AllValues("Access-Control-Allow-Methods");
  if (s.Length > 0) sResult += (" Methods: " + s);

  s = oS.ResponseHeaders.AllValues("Access-Control-Allow-Headers");
  if (s.Length > 0) sResult += (" SendHdrs: " + s);

  s = oS.ResponseHeaders.AllValues("Access-Control-Expose-Headers");
  if (s.Length > 0) sResult += (" ExposeHdrs: " + s);

  return sResult;
}

-Eric

Leaking Keystrokes

Windows 10’s IE11 continues to send your keystrokes over the internet in plaintext as you type in the address bar, a part of the “Search Suggestions” feature:

failing

“But I don’t search from the address bar,” you might say.

That may be, but if you type or paste a URL (sans protocol) into the address bar, all of that text gets leaked too:

Danger

This problem doesn’t exist in Edge (which always gets search suggestions from Bing, regardless of your Search Provider, but it at least uses HTTPS). It also doesn’t occur in Firefox’s or Chrome’s provider for Bing Search, or if you use Google or Yahoo search providers in Internet Explorer.

Extended Validation Certificates – The Introduction

In 2005, one of my first projects on the Internet Explorer team was improving the user-experience for HTTPS sites (“SSLUX”).

Our first task was to change the certificate error experience from the confusing and misleading modal dialog box:

Certificate errors UX

… to something that more clearly conveyed the risk and which more clearly discouraged users from accepting invalid certificates. We quickly settled upon using a blocking page for bad certificates, a pattern seen in all major browsers today.

Next, we wanted to elevate the security information from the lowly lock buried at the bottom of the window (at best, since the status bar could be hidden entirely):

IE6 status bar

As a UI element, the lock resonated with users, but it wasn’t well understood (“I look for the lock and if it’s there, I’m safe”). We felt it was key to ensure that users not only saw that the connection was secure, but also with whom a secure connection had been made. This was especially important as some enterprising phishers had started obtaining HTTPS certificates for their spoofing sites, with domain names like BankOfTheVVest.com. Less urgently, we also wanted to help users understand that a secure connection didn’t necessarily mean the site is safe; the common refrain was that we’d happily set up a secure connection to a site run by the Russian Mafia, so long as the user recognized who they were talking to.

We decided to promote the HTTPS certificate information to a new UI element next to the address bar1. Called the “Trust Badge”, the button would prominently display the information about the owner and issuer of the HTTPS certificate, and clicking it would allow users to examine the certificate in full:

Displaying the Issuer of the certificate was deemed especially important; we knew some CAs were doing a much better job than others. High-volume-Low-vetting CAs’ $20 certificates were, to users, indistinguishable from the certificates from CAs who did a much more thorough job of vetting their applicants (usually at a much higher price point). The hope was that the UI would both shame lazy CAs and also provide a powerful branding incentive for those doing a good job.

We were pretty excited to show off our new work in IE7 Beta 1, but five months before our beta shipped, Opera beat us to it: Opera 8 beta 2 included a UI that was nearly identical to what we were building.

During those five months, however, we spoke to some of the Certificate Authorities in the Microsoft Root CA program and mentioned that we’d be making some changes to IE’s certificate UI. They expressed excitement to hear that their names would no longer be buried in the depths of a secondary dialog, but cautioned: “Just so long as you don’t do what Opera did.”

“Why’s that?” we asked innocently.

“Well, they show the Subject organization and location information in their UI.”

“And that’s a problem because…” we prompted.

“Well, we don’t validate any of the information in the certificate beyond the domain name,” came the reply.

“But you omit any fields you don’t validate, right?” we asked with growing desperation.

“Nah, we just copy ’em over.”

After the SSLUX feature team picked our collective jaws off the floor, we asked around and determined that, yes, the ecosystem “race to the bottom” had been well underway over the preceding few years, and so-called “Domain validation” (DV) of certificates was extremely prevalent. While not all DV certificates contained inaccurate information, there was no consistent behavior across CAs.

Those CAs who were doing a good job of vetting certificates were eager to work with browsers to help users recognize their products, and even the “cheap” CAs felt that their vetting was better than that of their competitors2. Soon the group that evolved into the CA/Browser Forum was born, pulling in stakeholders of all types (technologists, policy wonks, lawyers) from all over the industry (Microsoft, Mozilla, Konqueror, etc). Meetings were had. And more meetings. And calls. And much sniping and snarking. And more meetings. Eventually, the version 1.0 guidelines for a new sort of certificate were adopted. These Extended Validation (née “Enhanced Validation”, née “High Assurance”) certificates required specific validation steps that every CA would be required to undertake.

EV certificates were far from perfect, but we thought they were a great first step toward fixing the worst problems in the ecosystem.

Browsers would clearly identify when a connection was secured with EV (IE flood-filled the address bar with green) to improve user confidence and provide sites with a business reason to invest (time and money) in a certificate with more vetting. For the EV UI treatment, browsers could demand sites and CAs use stronger algorithms and support features like revocation checking. Importantly, this new class of certificates finally gave browsers a stick to wield against popular CAs who did a poor job—in the past, threats to remove a CA from the trust store rang hollow, because the CA knew that users would blame the browser vendor more than the CA (“Why do I care if bad.com got a certificate, good.com should work fine!”); with EV, browsers could strip the EV UX from a CA (leading their paying customers to demand refunds) without issuing an “Internet Death Sentence” for the entire CA itself.

Our feature was looking great. Then the knives really came out.

…to be continued… (preview)

-Eric

1 Other SSLUX investments, like improving the handling of Mixed Content, were not undertaken until later releases.

2 Multiple CAs who individually came to visit Redmond for meetings brought along fraudulent certificates they’d tricked their competitors to issue in our names, perhaps not realizing how shady this made them look.

Life in Austin

The following are some random notes about moving to Austin; previously, I’d spent 11 years in Redmond, Washington working for Microsoft. I grew up mostly in Maryland, except for a three year stint in Michigan. I’m sharing my thoughts here mostly to avoid retyping them each time a friend says they’re thinking about moving to town– something that seems to be happening more and more frequently as the rest of the country notices what a gem Austin is.

My wife and I moved to Austin 1207 days ago (October 2012); I’d meant to write a post a thousand days in, but I’m just getting around to it now.

tl;dr and just want to look at Austin? Check out this amazing “Austin by Air” video by former colleague Gerard Juarez.

Housing
The city is great for the young and young at heart, and Texas in general is very much a place for young families. Families with three or more kids aren’t unusual. The area is growing rapidly, with very low unemployment and tons of construction, both downtown and in the ever-expanding suburbs. Housing prices downtown are extremely high, but they fall off quickly as you get further from the core of the city. Real estate prices are much better than Redmond overall, but are very tightly tied to distance from the city center. In Redmond, we had a 1,560-square-foot house built in 1968, half a mile from Microsoft. We basically traded it for our house here, built in 1993 and almost 3,000 square feet. If we’d been willing to go another 15 miles out, we could have added another 500 square feet and saved ~$100K on a house five or ten years newer.

We live in the northwest in a neighborhood called “Jester”; it’s a nice neighborhood of wide streets and sidewalks built mostly in the early 1990s.

image

The east is pretty underdeveloped and historically the cheaper place to live, but it’s starting to build out with new planned communities and amenities. The west, south of Lake Travis, is the hot area for the middle/upper-middle class: tons of huge new neighborhoods in the $300-600k range. The north (toward Round Rock) tends to offer significant savings on housing with the tradeoff of (typically) longer commutes.

Finding a house? Kevin Bown was the agent recommended to us when we were pondering a move here. He helped us get up to speed on the different areas and their tradeoffs. He did a good job of helping us find our house (which we love) in just a few visits, and he works really well over email, which was very helpful since we only visited Austin twice before we moved.

Traffic and Driving
Your perspective on traffic largely depends on where you’re moving from: if you’re coming from a major metropolitan area like D.C. or New York, you’ll laugh about how light it is, but it can still be one of the bigger hassles of living here. For many years, the city tried to constrain its growth by refusing to build roads, a strategy that could only have worked if newcomers weren’t arriving from places with even worse congestion. Roads are slowly getting widened and tolling is starting in some areas.

Mass transit is nearly non-existent, although there is a very limited rail service from Leander (in the north) to downtown. Before I started at Google, my commute to the Telerik office (downtown) was ~12 miles; the drive took 23 minutes best case and around 50 minutes at rush hour. Drivers are generally polite and less aggressive than you’ll find on the East Coast, but there are some crazies out there.

Cars are big; roads, lanes, and parking spots are wide. Gas is cheap; my last fill-up was under $1.60 a gallon. You will get tinted windows and daydream about inventions to rapidly cool parked cars.

Tech Jobs
Job-wise, Austin is super-hot. While tons of tech companies are building major offices here, many serve mostly as sales/support/marketing/recruiting outposts. The market rate for Senior Developers here seems to be about $110k or so, although the range is very wide: most startups sit on the low end, while remote devs for companies headquartered in more expensive locales can earn quite a bit more.

As of 2021, Amazon is growing a large tech campus here, and Apple’s campus is coming soon.

Tech
I seem to get good 4G LTE coverage around town. Cable internet is widely available from several providers; we used to pay ~$80 for 30mbps, which was dumb; as of 2018 we have Spectrum Cable internet (200-400mbps for $50). Fiber (via Google Fiber or Grande Communications) is available mostly on the south side of the city, but it’s (slowly) spreading northward. We have a Fry’s Electronics. Three HDTV towers are about 3 miles from our house, so we don’t have cable, and on the rare occasions that we watch TV (e.g. the Super Bowl) our tiny HD antennas work great.

Weather and Environs
Weather is pretty awesome for about 8 months, bearable for 2 months, and hot as hell for 2 months. In 2012, there were 100 days over 100F. By my recollection, in 2013 there were 30. In 2014 there were 5. In 2015, there might’ve been one or two. It’s currently late January and around 80 degrees, but it’ll be around 60 later in the week.

If the goal is to make Austin look as good as possible, come in March/April; if you want to make it look as bad as possible, come in late August / early September. Unfortunately, many festivals (SxSW, etc) take place during the best times, so airline ticket prices can vary wildly.

People may try to tell you crazy stories about Texas wildlife/bugs/etc. Generally, that’s because such incidents are rare but memorable. (We’ve had two scorpions and two tarantulas in our 3 years here.) Last summer there were a ton of mosquitoes, which was pretty nasty, but that seemed to be an outlier; we didn’t have issues our first two summers. Fire ants are a real problem if you like going barefoot or in sandals.

Hurricanes are east of Austin, tornadoes are north, so the only interesting thing we have is periodic thunderstorms which are intense but fun if you’re inside. Oh, and periodic flooding (only problematic in a few places), drought (when we’re not flooding), and damaging hail storms (even when it’s well above freezing at ground level).

Daylight

Daylight hours in Austin are more even than in Seattle: Austin’s day ranges from 10h12m to 14h06m, while Seattle’s ranges from a depressing 8h25m to an endless 15h59m.

Activities and Surroundings
Nightlife in downtown Austin can be wild (due to the University, and Austin’s goal to be the “Live Music Capital of the World”). The river/lake running through Austin is fun for boating and water events.

Sports are a big deal, particularly college football. On our first trip to the doctor’s office, my wife and I both happened to be clad in maroon; this got us pegged as fans of the “Aggies” (Texas A&M) and led to some amusing misunderstandings.

Texas is stupid big (“the size of France,” as the bumper sticker boasts), but San Antonio is only around 75 minutes away and has tons of stuff (SeaWorld, Six Flags, etc.). You can drive to beaches (Galveston / Corpus Christi) in 3 or 4 hours; neither is as nice as most East Coast or California beaches, but they’re nice enough for a few days in the sun and sand. Caribbean cruises leave from Galveston and are pretty cheap. Austin’s airport is growing fast, but many destinations require a quick stop in Houston or Dallas. Flights to the various resorts in Mexico and the Caribbean are pretty cheap. We haven’t taken advantage of either of these yet (due to young kids) but we hope to in a few years. (We did, a lot.)

Kids Activities
We have a 2.5 year old and a 2 week old; we haven’t discovered most of the activities in the area, yet.

There are lots of parks around town, some with splash pads for the summer. There’s a smallish Zoo south of town. There’s a kids’ museum out near the airport. Seasonal events like the holiday Trail of Lights and the spring kite festival take place in the enormous Zilker Park adjacent to downtown. Around Halloween/Thanksgiving, we started going to Sweet Berry Farm just west of town in Marble Falls to enjoy the festivities.

Food
Food choices are decent: great BBQ, lots of good Mexican, okay Chinese (about what you’d find in Redmond). Austin doesn’t seem to have figured out Thai food yet, to my disappointment (though we’ve heard good things about Chada Thai in Cedar Park).

Here are a few of my favorites:

  • Torchy’s Tacos. Our favorite taco spot. It’s not fancy or pretentious at all, but their tacos are, as they say, damn good. Breakfast tacos are a thing here, and if you’ve not had them before, you’ll soon wonder why not.
  • Mandola’s Italian. It’s an Italian grocery store with sit-down dining. Tasty.
  • The Oasis. Wide menu; nice view of Lake Travis.
  • Chez Zee. Fancier food without attitude.
  • Steiner Ranch Steakhouse. Great steak, nice views, not insanely expensive.

Taxes
No state income tax. Sales tax is 6.5% outside the city, 8.5% inside. Real-estate taxes are wildly variable and a big deal; they range from ~1.5% to almost 3%, all significantly higher than we experienced in Redmond.

Politics and Vibe
Austin is more of a county than a city, really; it’s spread over a huge area. The neighborhood / area makes a big difference in both prices and vibe.

Politeness quietly reigns supreme here; strangers will ask you how things are going and care about your answer. People apologize. You’ll quickly notice that, without regard to convenience, doors are almost always held, and women are always the first to get on and off elevators, an unspoken rule that seems to have almost universal compliance.

Austin is a bit of an oasis inside the craziness that is Texas… it definitely has the Seattle/Portland/California feel with plenty of hipsters and tattoo-bearing twenty-somethings.

The rest of Texas tends toward hardcore conservative, but if you avoid discussing politics, religion, and world events, everybody gets along just fine.

Bonus Update: Austin Lingo.

Authenticode and SHA1–Redux

I tried to install Telerik DevCraft Ultimate, but Windows 8.1 and Windows 10 blocked it:

Blocked

“Unknown Publisher”? Hrm.

That’s weird. I know Telerik signs their code, and I was pretty sure their code-signing certificate uses SHA256, so the new restrictions on SHA1 in code-signing shouldn’t be a problem, right?

Sure enough, the code is signed with a SHA256 certificate:

SHA256

… and we know that SHA1 file digests are still allowed (heck, MD5 digests are still allowed!). So what’s going wrong?
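As an aside, these digest algorithms differ most visibly in output size; here’s a quick sketch using the `openssl dgst` command. This is illustrative only — Authenticode actually hashes specific regions of the PE file, not the raw bytes, but the underlying algorithms are the same:

```shell
# Digest an (empty) input with each algorithm Authenticode accepts.
printf '' | openssl dgst -md5     # 128-bit digest (32 hex characters)
printf '' | openssl dgst -sha1    # 160-bit digest (40 hex characters)
printf '' | openssl dgst -sha256  # 256-bit digest (64 hex characters)
```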

Check out the certificate chain:

image

The intermediate certificate is SHA1.

Other code signed with the same chain doesn’t fail, but that’s because that other code was time-stamped before the January 1st deprecation of SHA-1.

To avoid “Unknown Publisher” warnings for your software, you need to ensure that any intermediate certificates in your signing chain are also signed using SHA256. (Only the root certificate at the top of the chain may use SHA1).
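One quick way to check is to export each certificate in the chain (e.g. from the certificate details dialog) and inspect its signature algorithm with OpenSSL. A sketch — `intermediate.cer` is a placeholder filename for your exported certificate:

```shell
# Print the algorithm used to sign an exported certificate.
# "intermediate.cer" is a placeholder; use -inform PEM for Base64 exports.
openssl x509 -in intermediate.cer -inform DER -noout -text | grep "Signature Algorithm"
# You want to see sha256WithRSAEncryption, not sha1WithRSAEncryption.
```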

-Eric