Client Certificates on Android

Recently, this interesting tidbit crossed my Twitter feed:

Tweet: "Your site asks for a client certificate?"

Sure enough, if you visited the site in Chrome, you’d get a baffling prompt.

My hometown newspaper shows the same thing:

No Certificates Found prompt on Android

Weird, huh?

Client certificates are a way for a browser to supply a certificate to the server to verify the client’s identity (in the typical case, an HTTPS server only sends its own certificate so that the client can validate that the server is what it says it is).

In the bad old days of IE6 and IE7, the default behavior was to show a similar prompt, but what’s going on with the latest Chrome on modern Android?

It turns out that this is a limitation of the Android security model, to which Chrome on Android is subject. In order for Chrome to interact with the system’s certificate store, the operating system itself shows a security prompt.

If your server has been configured to accept client certificates (in either require or want mode), you should be sure to test it on Android devices to verify that it behaves as you expect for your visitors (most of whom likely will not have any client certificates to supply).

-Eric


HTTPS Only Works If You Use It – Tipster Edition

Convoy with three armored tanks and one pickup truck

It’s recently become fashionable for news organizations to build “anonymous tip” sites that permit members of the public to confidentially submit tips about stories of public interest.

Unfortunately, would-be tipsters need to take great care when exploring such options, because many organizations aren’t using HTTPS properly to ensure that the user’s traffic to the news site is protected from snoopers on the network.

If the organization uses any non-secure redirections in loading its “Tips” page, or the page pulls any unique images or other content over a non-secure connection, the fact that you’ve visited the “Tips” page will be plainly visible to your ISP, employer, fellow coffee shop patron, home-router-pwning group, etc.

NYTimes call for Tips, showing non-secure redirects

The New Yorker Magazine call for Tips, showing non-secure redirects
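As a sketch of what to audit: given the chain of URLs a “Tips” link traverses (each hop from following a Location header), any non-HTTPS hop is visible on the wire. The helper and URLs below are illustrative, not taken from either site:

```javascript
// Classify each hop of a redirect chain; following the Location
// headers to build the chain is not shown here.
function insecureHops(chain) {
  return chain.filter((u) => new URL(u).protocol !== "https:");
}

// Example: a "Tips" link that bounces through a non-secure redirect.
// Every URL returned here is plainly visible to network snoopers.
const chain = [
  "http://example.com/tips", // hypothetical non-secure redirect
  "https://www.example.com/tips",
];
console.log(insecureHops(chain));
```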

Here are a few best practices for organizations that either a) solicit anonymous tips online or b) use webpages to tell would-be leakers how to send anonymous tips via Tor or non-electronic means:

For end users:

  • Consider using Tor or other privacy-aiding software.
  • Don’t use a work PC or any PC that may have spyware or non-public certificate roots installed.

Stay private out there!

-Eric


Security UI in Chrome

The combined address box and search bar at the top of the Chrome window is called the omnibox. The icon and optional verbose state text adjacent to that icon are collectively known as the Security Chip:

image

The security chip can render in a number of states, depending on the status of the page:

image Secure – Shown for HTTPS pages that were securely delivered using a valid certificate and not compromised by mixed content or other problems.
image Secure, Extended Validation – Shown for Secure pages whose certificate indicates that Extended Validation was performed.
image Neutral – Shown for HTTP pages, for Chrome’s built-in pages (like chrome://extensions), and for pages containing passive mixed content or delivered using a policy-allowed SHA-1 certificate.
image Not Secure – Shown for HTTP pages that contain a password or credit card input field. Learn more.
image Not Secure (Red) – What Chrome will eventually show for all HTTP pages. You can configure a flag (chrome://flags/#mark-non-secure-as) to Always mark HTTP as actively dangerous today to get this experience early.
image Not Secure, Certificate Error – Shown when a site has a major problem with its certificate (e.g. it’s expired).
image Dangerous – Shown when Google Safe Browsing has identified this page as a source of malware or phishing attacks.

The flyout that appears when you click the security chip is called PageInfo or Website Settings; it shows the security status of the page and the permissions assigned to the origin website:

image

The text atop the pageinfo flyout explains the security state of the page:

image image image (secure, mixed content, and expired-certificate states)

Clicking the Learn More link on the flyout for valid HTTPS sites once opened the Chrome Developer Tools’ Security Panel, but now it goes to a Help article. To learn more about the HTTPS state of the page, instead press F12 and select the Security Panel:

image

The View certificate button opens the Windows certificate viewer and displays the current origin’s certificate. Reload the page with the Developer Tools open to see all of the secure origins in the Secure Origins List; selecting any origin allows you to view information about the connection and examine its certificate.

image

The floating grey box at the bottom of the Chrome window that appears when you hover over a link is called the status bubble. It is not a security surface and, like IE’s status bar, it is easily spoofed.

image

Navigation to sites with severe security problems is blocked by an interstitial page.

image

(A list of interstitial pages in Chrome can be found at chrome://interstitials/.)

Clicking on the error code (e.g. ERR_CERT_AUTHORITY_INVALID in the screenshot below) will show more information about the certificate of the blocked site:

image

Clicking the Advanced link shows more information, and in some cases will show an override link that allows you to disregard the security protection and proceed to the site anyway.

image

If a site uses HTTP Strict Transport Security, the Proceed link is hidden and the user has no visible option to proceed.

image
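A server opts into this behavior by sending the Strict-Transport-Security response header over its HTTPS responses. A minimal sketch of building that header value (the max-age of one year and the includeSubDomains choice are illustrative defaults):

```javascript
// Build a Strict-Transport-Security header value. Once a browser has
// seen this header from a host over HTTPS, it refuses non-secure
// connections to that host (and hides the "Proceed" override) until
// max-age seconds have elapsed.
function hstsHeader({ maxAge = 31536000, includeSubDomains = true } = {}) {
  return "max-age=" + maxAge + (includeSubDomains ? "; includeSubDomains" : "");
}

// e.g. res.setHeader("Strict-Transport-Security", hstsHeader());
console.log(hstsHeader()); // "max-age=31536000; includeSubDomains"
```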

In current versions of Chrome, the user can type a fixed string (sometimes referred to as a Konami code) to bypass HSTS, but this option is deliberately undocumented and slated for removal from Chrome.

If a HTTPS problem is sufficiently bad, the network stack will not connect to the site and will show a network error page instead.

image

-Eric

PS: There exists a developer reference to Chrome Security UI across platforms, but it’s somewhat outdated.


2016 Brotli Update

Windows 10 Build 14986 adds support for Brotli compression to the Edge browser (but, somewhat surprisingly, not IE11). So at the end of 2016, we now have support for this improved compression algorithm in Chrome, Firefox, Edge, Opera, Brave, Vivaldi, and the long tail of browsers based on Chromium. Of modern browsers, only Apple is a holdout, with a “Radar” feature request logged against Safari but no public announcements.

Unfortunately, behavior across browsers varies at the edges:

  • Edge advertises support for and decodes Brotli compression on both HTTP and HTTPS requests.
  • Chrome advertises Brotli for HTTPS connections but will decode Brotli for both HTTPS and HTTP responses.
  • Firefox advertises Brotli for HTTPS connections and will not decode Brotli responses on HTTP responses.

There’s nothing horribly broken here: sites can safely serve Brotli content to clients that ask for it, and those clients will probably decode it. The exception is when the request goes over HTTP… the reason Firefox and Chrome limit their requests for Brotli to HTTPS is that, historically, middleboxes (like proxies and gateway filters) have been known to corrupt compression schemes other than gzip and deflate. This proved to be such a big problem in the rollout of SDCH (a now-defunct compression algorithm Chrome supported) that the Brotli implementers decided to try to avoid the issue by requiring a secure transport.

-Eric

PS: Major sites, including Facebook and Google, have started deploying Brotli in production– if your site pulls fonts from Google Fonts, you’re already using Brotli today! In unrelated news, the 2016 Performance Calendar includes a post on serving Brotli from CDNs that don’t explicitly support it yet. Another recent post shows how to pair maximal compression for static files with fast compression for dynamically generated responses.


Do Not Lie to Users

Multiple people working on Outlook.com thought this was a reasonable design.

After a user deletes an email, then manually goes into the Deleted Items folder, then clicks Delete again, then acknowledges that they wish to Permanently Delete the deleted item:

Delete

… the item is still not deleted. You can “Recover deleted items” from your Deleted items folder:

Recover

… and voila, they’re all hiding out there:

Purge

Further, if you click the Purge button, you’ll find that it doesn’t actually do anything.

The poor user is expected to:

  1. Be aware of this insane behavior
  2. Individually check a box next to each unwanted message, then click Purge.

Microsoft’s design is offensively anti-privacy.

-Eric

PS: This sums it up pretty well.

image


Useful Resources when Developing Chrome Extensions

I’ve built a handful of Chrome extensions this year, and I wrote up some of what I learned in a post back in March. Since then, I’ve found two more tricks that have proved useful.

First, the Chrome Canary channel includes a handy extension error console to quickly expose extension errors.

Simply open the chrome://extensions page and an Errors link will be available for side-loaded extensions (“Load Unpacked Extension”):

Errors link on Chrome://extensions

Error console

This error console can be enabled in other channels (dev/beta/stable) by launching with the --error-console command line argument.

Next, you’ll often end up with both a public version of your extension and a side-loaded developer build installed at the same time. If their Action icons are the same, it’s easy to get confused about which version is which.

To help clarify which is the development version, you can easily add a Dev badge to the icon of the developer build:
Dev badge

The code to add to your background.js file is straightforward and works even if the background page is not persistent:

 // Our background page isn't persistent, so subscribe to events.
 chrome.runtime.onStartup.addListener(() => init());
 // onInstalled fires when the user uses the chrome://extensions page to reload
 chrome.runtime.onInstalled.addListener(() => init());

 function init() {
   // Add a badge notification if this is a dev-install
   chrome.management.getSelf((o) => {
     if (o.installType === "development") {
       chrome.browserAction.setBadgeText({ text: "dev" });
     }
   });
 }

Finally, as mentioned in my last post, the Chrome Extension Source Viewer remains invaluable for quickly peeking at the code of existing extensions in the store.

I’ve had more fun writing Chrome extensions than anything else I’ve built this year. If you haven’t tried building an extension yet, I encourage you to try it out. Best of all, your new Chrome extension development skills will readily transfer to building extensions for Opera, Firefox, and Microsoft Edge.

-Eric


Email Tracking Links are the Worst

Note: The non-secure email link vulnerability described in this post was fixed before I posted it publicly. The Clinton campaign’s InfoSec team was polite, communicative, and fun to work with.

All emailed links to HillaryClinton.com should now use HTTPS.

Since building the MoarTLS Browser Extension to track the use of non-secure hyperlinks, I’ve found that a huge number of otherwise-secure sites and services email links to users that are non-secure. Yes, I’ve ranted about this before; today’s case is a bit more interesting.

Here’s a recent example of an email with non-secure links:

Non-secure links in email

As you can see, all of the donation links are HTTP URLs to the hostname links.hillaryclinton.com, including the link whose text claims that it points to httpS://www.hillaryclinton.com. If you click on that link, you usually end up on a secure page, eventually:

Secure server

So, what’s going on here?

Why would a site with a HTTPS certificate send users through a redirect without one?

The answer, alas, is mundane click tracking, and it impacts almost anyone using an “Email Service Provider” (ESP) like MailChimp.

For instance, here’s the original “Download our Browser” email I got from Brave:

Brave Download link uses HTTP

Brave inserted a HTTPS link to their download in their original email template, but MailChimp rewrote it to non-securely point at their click-tracking server “list-manage.com”. The click tracker allows the emailer to collect metrics about the effectiveness of the email campaign (“How many users clicked, and on which link?”). There’s no inherent reason why the tracker must be non-secure, but this appears to be common practice for most ESPs, including the one used by the Clinton campaign.

Indeed, if you change Clinton’s injected tracking URL to HTTPS, you will see a certificate error in your browser:

Bad Certificate name mismatch

… revealing the source of the problem— the links subdomain of the main site is pointed at a third-party ESP, and that ESP hasn’t bothered to acquire a proper certificate for it.

DNS reveals the "links" domain is pointed at a third party

The entire links subdomain is pointed at a 3rd-party server. A friend mused: “Clinton could’ve avoided this whole debacle if she were running her own email servers.”

So What, Who Cares?

When I complain about things like this on Twitter, I usually get at least a few responses like this:

Who cares tweet

The primary problem is that the responder assumes that the HTTP link will reliably redirect to the HTTPS page… and this is true in most cases, except when there’s an attacker present.

The whole point of HTTPS is to defend against network-based adversaries. If there’s an active man-in-the-middle (MITM) attacker on the network, he can easily perform an SSL-stripping attack, taking over the non-secure connection from the user and fooling the user into submitting their private information. The attacker could simply keep the connection on HTTP so he can monitor and manipulate it, or he could redirect the victim to a fake domain he controls, even one with a valid HTTPS certificate (e.g. https://donations.hillary-clinton.com).

Okay, so that’s bad.

Unfortunately, it gets worse in the Clinton case. Normally a bad guy taking advantage of SSL stripping still needs to fool the user into giving up something of value– not a high bar, but nevertheless. In the case of Clinton’s donation link, there’s a bigger problem, alluded to in the text of the email itself:

Donations go through "immediately"

That’s right—if you click the link, the server collects your money, no questions asked. Security researchers immediately recognize the threat of a cross-site request forgery… any web page or MITM could direct a victim’s browser to the target link and cause money to be withdrawn from their bank account. To protect against that, a properly developed site includes a secret canary in the URL so that the attacker cannot generate a valid link. And if you look at the markup of the email you see that the campaign has done just that (behind the black boxes to protect my account):

CSRF Canary

Unfortunately, there’s a fatal flaw here: the link is HTTP, which means that the canary is sent in raw form over the network, and the canary doesn’t expire after it’s been used. Anyone who views the unprotected HTTP traffic can collect my secret token and then feed the link back to my browser, forcing me to donate over and over and over again without any kind of prompt.

Aside: Beyond the security problem, there’s a significant functionality problem here. In the HTTP protocol, GET requests like those sent in navigations are meant to be idempotent, a fancy way of saying that sending the same request more than one time shouldn’t have any side effects. The Clinton campaign’s donation page, however, will bill the user every single time the donation link is loaded no matter why the page was loaded. Even a user who is not under attack can suffer multiple-billing if they don’t immediately close the tab after donating. If the user navigates that tab to another site, then clicks the browser’s back button, they’re charged again. Clicking back and forward a few times to figure out what’s happening? Billed over and over and over.

Things are even worse on a memory-constrained device… browsers like Chrome will “page out” tabs to save memory as you switch applications. When you switch back to the browser, it swaps the page back in, reloading it. And you’re billed again:

Push notification from AMEX reveals I've been charged again

… billing continues each time the browser is activated until you have the good sense to close the tab. (You can force Desktop Chrome to manually discard a tab early by visiting chrome://discards/; you’ll note you’re billed again the next time you click the tab).
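The repeat-billing behavior above is exactly what the Post/Redirect/Get pattern prevents: perform the charge on a POST, then redirect, so that reloads, back/forward navigation, and tab restores replay only a harmless GET. An illustrative handler (chargeCard stands in for a hypothetical billing call):

```javascript
function chargeCard(req) {
  /* hypothetical call into the payment processor */
}

function handleDonation(req, res) {
  if (req.method === "POST") {
    chargeCard(req); // the side effect happens exactly once, on the POST
    res.writeHead(303, { Location: "/thank-you" }); // 303 See Other -> GET
    res.end();
  } else {
    res.end("Thank you for donating!"); // reloading this GET charges nothing
  }
}
```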

Whether you’re a Presidential Campaign or a streaming music site, please use HTTPS everywhere—there’s no good excuse not to protect your users. And if you’re taking users’ money, you need to be very, very sure that your requests contain a nonce so that no one can be billed again without confirmation.

Thanks for your help in securing the web!

-Eric Lawrence
