Get Help with HTTPS problems

Sometimes, when you try to load an HTTPS address in Chrome, instead of the expected page you get a scary warning, like this one:

image

Chrome has found a problem with the security of the connection and has blocked loading the page to protect your information.

In a lot of cases, if you’re just surfing around, the easiest thing to do is simply find a different page to visit. But what if the error appears on an important site that you really need to see? You shouldn’t just “click through” the error, because doing so could put your device or information at risk.

In some cases, clicking the ADVANCED link might explain more about the problem. For instance, in this example, the error message says that the site is sending the wrong certificate; you might try finding a different link to the site using your favorite search engine.

image

Or, in this case, Chrome explains that the certificate has expired, and asks you to verify that your computer clock’s Date and Time are set correctly:

image

You can see the specific error code in the middle of the text:

image

Some types of errors are a bit more confusing. For instance, NET::ERR_CERT_AUTHORITY_INVALID means that the site’s certificate didn’t come from a company that your computer is configured to trust.

image

Errors Everywhere?

What happens if you start encountering errors like this on every HTTPS page that you visit, even major sites like https://google.com?

This often means that some software on your device or network is interfering with your secure connections. Sometimes this software is well-meaning (e.g. anti-virus software, ad-blockers, parental control filters), and sometimes it’s malicious (adware, malware, etc.). But even buggy, well-meaning software can break your secure connections.

If you know what software is intercepting your traffic (e.g. your antivirus), consider updating it or contacting the vendor.
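If you’re comfortable with a little scripting, one way to see who is actually issuing the certificates your machine receives is to fetch one and inspect it. Here is a minimal sketch; it assumes Python and the third-party cryptography package (neither is mentioned in this post). An intercepting antivirus or filtering proxy will typically show up as the issuer in place of a public CA:

import ssl
from cryptography import x509  # assumption: `pip install cryptography`

def who_issued(hostname, port=443):
    """Fetch the certificate presented to *this* machine and report its issuer."""
    # get_server_certificate performs no validation; we only want to look at the cert.
    pem = ssl.get_server_certificate((hostname, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.issuer.rfc4514_string()

# Expect a public CA here; an interception product usually lists its own name instead.
print(who_issued("www.google.com"))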

Getting Help

If you don’t know what to do, you may be able to get help in the Chrome Help Forum. When you ask for help, please include the following information:

  • The error code (e.g. NET::ERR_CERT_AUTHORITY_INVALID).
    • To help the right people find your issue, consider adding this to the title of your posting.
  • What version of Chrome you’re using. Visit chrome://version in your browser to see the version number.
  • The type of device and network (e.g. “I’m using a laptop on wifi on my school’s network.”)
  • The error diagnostic information.

You can get diagnostic information by clicking or tapping directly on the text of the error code. When you do so, a bunch of new text will appear in the page:

image

You should select all of the text:

image

…then hit CTRL+C (or Command ⌘+C on Mac) to copy the text to your clipboard. You can then paste the text into your post. The “PEM encoded chain” information will allow engineers to see exactly what certificate the server sent to your computer, which might shed light on what specifically is interfering with your secure connections.

With any luck, we’ll be able to help you figure out how to surf securely again in no time!

 

-Eric


Chrome Deprecates Subject CN Matching

If you’re using a Self-Signed certificate for your HTTPS server, a deprecation coming to Chrome may affect your workflow.

Chrome 58 will require that certificates specify the hostname(s) to which they apply in the SubjectAltName field; values in the Subject field will be ignored. This follows a similar change in Firefox 48. If impacted, you’ll see something like this blocking page as you load your HTTPS site:

NET::ERR_CERT_COMMON_NAME_INVALID blocking page in Chrome

NET::ERR_CERT_COMMON_NAME_INVALID is an unfortunate error code, insofar as all common names are now ignored. Chrome is working to improve the debuggability of this experience.

Update: Both of those improvements have landed. Chrome now shows [missing_subjectAltName] in the details on the Certificate Error page, and a Subject Alternative Name Missing warning in the Security panel of the Developer Tools.

Notably, Windows’ ancient makecert.exe utility cannot set the SubjectAltName field in certificates, which means that if you’re using it to generate your self-signed certificates, you need to stop. Instead, users of modern Windows can use the New-SelfSignedCertificate command in PowerShell.

New-SelfSignedCertificate -DnsName "www.example.com", "example.com" -CertStoreLocation "cert:\CurrentUser\My"

Using openssl for self-signed certificate generation? See https://stackoverflow.com/a/27931596.
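If you’d rather script the certificate generation, here is a minimal sketch of the same idea using Python and the third-party cryptography package (an assumption on my part; the hostnames and output file names below are placeholders):

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Requires a reasonably recent version of the package: pip install cryptography
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")])
san = x509.SubjectAlternativeName(
    [x509.DNSName("www.example.com"), x509.DNSName("example.com")]
)
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                        # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(san, critical=False)       # Chrome 58+ only looks at these names
    .sign(key, hashes.SHA256())
)
with open("cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))

Trust the resulting cert.pem on your test machines as you would any self-signed certificate; because the hostnames live in the SubjectAltName extension, Chrome 58 and later will accept it.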

This new restriction may also impact users of very old versions of Fiddler (or FiddlerCore), or users who have configured Fiddler to use MakeCert for whatever reason. Fortunately, Fiddler offers a number of different certificate generators, so you just need to make a small configuration change. To switch away from MakeCert, click Tools > Fiddler Options > HTTPS and click the “Certificates generated by MakeCert engine” link. Change the dropdown to CertEnroll and click OK. Click Actions > Reset All Certificates and restart Fiddler.

image

If you’re building an application atop FiddlerCore, you’ll need to make sure you’re not using makecert; see the end of this post for help.

-Eric Lawrence

PS: There’s also an EnableCommonNameFallbackForLocalAnchors policy. You shouldn’t use it; just fix your certificates, or they’ll break when the policy is removed in Chrome 65 or earlier.


The Trouble with Magic

“Magic” is great… except when it isn’t.

Software Design is largely about tradeoffs, and one of the more interesting tradeoffs is between user experience and predictability. This has come up repeatedly throughout my career, and it came up in two independent contexts yesterday, both of which I’ll describe in this post.

Developer Magic

I’m working on a tiny UX change to Google Chrome to deemphasize the data component of data: URIs.

Chrome is a multi-platform browser that runs on Windows, Mac, Linux, ChromeOS, Android, and iOS, which means that I need to make the same change in a number of places. Four, to be precise: Views (our cross-platform UI that runs on Windows, Linux and ChromeOS), Cocoa (Mac), Bling (iOS) and Clank (Android). The change for Views was straightforward and I did it first; porting the change to Mac wasn’t too hard. With the Mac change in hand, I figured that the iOS change would be simple, as both are written in Objective C++. I don’t have a local Mac development box, so I have to upload my iOS changes to the Chromium build bots to see if they work. Unfortunately, my iOS build failed, with a complaint from the linker:

Undefined symbols for architecture arm7:
  “gfx::Range::ToNSRange() const”, referenced from:
      OmniboxViewIOS::SetEmphasis(bool, gfx::Range) in omnibox_view_ios.o
      OmniboxViewIOS::UpdateSchemeEmphasis(gfx::Range) in omnibox_view_ios.o
ld: symbol(s) not found for architecture arm7

Hrm… that’s weird; the Mac build worked and the iOS build used the same APIs. Let’s go have a look at the definition of ToNSRange():

ToNSRange() inside an if(defined(OS_MACOSX)

Oh, weird. It’s in an OS_MACOSX block, so I guess it’s not there for iOS. But how did it compile and only fail at linking?

Turns out that the first bit of “magic” is that when OS_IOS is defined, OS_MACOSX is always also defined:

iOS gets both MacOS and iOS defined

I was relieved to learn that I’m not the only person who didn’t know this, both by asking around and by finding code blocks like this:

If defined(win)||defined(mac)||defined(ios)

Okay, so that’s why it compiled, but why didn’t it link? Let’s look at the build configuration file:

image

Hmmm… That’s a bit suspicious: range_mac.mm and range_win.cc are both listed within a single target. But it seems unlikely that the Mac build includes the Windows code, or that the Windows build includes the Mac code. Which suggests that maybe there’s some magic not shown in the build configuration that determines what actually gets built. And indeed, it turns out that such magic does exist.

The overall Build Configuration introduces its own incompatible magic, whereby filenames suffixed with _mac only compile on Mac… and that is limited to actual Mac, not including iOS:

sources_assignment_filters

This meant that the iOS compilation had a header file with no matching implementation, and I was the first lucky guy to stumble upon this by calling the missing code.
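To make the failure mode concrete, here is a toy model of the filename filtering in plain Python (purely illustrative; it is not the actual GN logic):

def included_in_build(filename, target_os):
    """Toy approximation of the build system's filename magic."""
    if filename.endswith("_win.cc"):
        return target_os == "win"
    if "_mac." in filename:
        # The filter treats "_mac" as "actual macOS only"; iOS does not qualify,
        # even though OS_IOS builds also define OS_MACOSX for the compiler.
        return target_os == "mac"
    if filename.endswith(".mm"):
        return target_os in ("mac", "ios")  # Objective-C++ builds on both
    return True

# The header declares ToNSRange() for every OS_MACOSX build (including iOS),
# but the implementation file never makes it into the iOS target:
print(included_in_build("range_mac.mm", "mac"))  # True
print(included_in_build("range_mac.mm", "ios"))  # False -> undefined symbol at link time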

Magic handling of filenames is simultaneously great (“So convenient”) and awful—I spoke to a number of engineers who knew that the build does this, but had no idea how, or whether or not iOS builds would include _mac-suffixed files. My instinct for fixing this would be to rename range_mac.mm to just range_apple.mm (because .mm files are compiled only for Mac and iOS), but instead I’ve been told that the right fix is to just temporarily disable the magic:

Add exclusion via set_sources_assignment_filter

Talking to some of the experts, I learned that the long term goal is to get rid of the sources_assignment_filters altogether (No more magic!) but doing so entails a bunch of boring work (No more magic!).

Magic is great, when it works.

When it doesn’t, I spend a lot of time investigating and writing blog posts. In this case, I ended up flailing about for a few hours (because sending my various fix attempts off to the bots isn’t fast) trying to figure out what was going on.

There’s plenty of other magic that happens throughout the Chromium developer toolchain; some of it visible and some of it invisible. For instance, consider what happens when I forget the name of the command that finds out what release a changelist went into:

git find-release

Git “magically” knows what I meant, and points out my mistake.

Elsewhere, however, Chromium’s git “magically” knows what I meant and just does it:

git cl upalod (typo)

Which approach is better? I suppose it depends. The code that suggests proper commands is irritating (“Dammit, if you knew what I meant, you could just do it!”) but it’s also predictable—only legal commands run and typos cannot go overlooked and propagate throughout scripts, documentation, etc.
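To make the contrast concrete, here is a small illustrative sketch (the command list is invented) of the two behaviors:

import difflib

KNOWN_COMMANDS = ["upload", "status", "issue", "find-releases", "format"]

def resolve(command, auto_correct=False):
    """Map a possibly mistyped command to a real one."""
    if command in KNOWN_COMMANDS:
        return command
    guess = difflib.get_close_matches(command, KNOWN_COMMANDS, n=1)
    if not guess:
        raise SystemExit("unknown command: %r" % command)
    if auto_correct:
        return guess[0]  # magic: silently run what we think the user meant
    raise SystemExit("unknown command: %r; did you mean %r?" % (command, guess[0]))

print(resolve("upalod", auto_correct=True))  # returns "upload" without ever asking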

This same type of tradeoff appeared in a different scenario by the end of the day.

End-User Magic

This repro won’t work forever, but try clicking this link: https://www.kubernetes.io. If you do this right now, you’ll find that the page works great in Chrome, but doesn’t work in IE or Edge:

Edge and IE show Certificate Error

If you pop the site into SSLLabs’ server test, you can see that the server indeed has a problem:

Certificate subject name mismatch

The certificate’s SubjectAltNames field contains kubernetes.io, but not www.kubernetes.io.
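You can check this yourself; the sketch below (Python plus the third-party cryptography package, both assumptions) fetches the certificate the server presents and lists the DNS names in its SubjectAltName extension:

import ssl
from cryptography import x509  # assumption: `pip install cryptography`

def san_names(hostname, port=443):
    """Return the DNS names in the SubjectAltName of the certificate a server presents."""
    pem = ssl.get_server_certificate((hostname, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
    return san.get_values_for_type(x509.DNSName)

names = san_names("www.kubernetes.io")
print(names)                          # included 'kubernetes.io' when this was written
print("www.kubernetes.io" in names)   # False at the time of writing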

So, what gives? Why does the original www URL work in Chrome? If you open the Developer Tools console while following the link, you’ll see the following explanation of the magic:

Console warning about automagic redirection

Basically, Chrome saw that the certificate for www.kubernetes.io was misconfigured and recognized that sending the user to the bare domain kubernetes.io was probably the right thing to do. So it just did that, which is great for the user. Right? Right??

Well, yes, it’s great for Chrome users, and maybe for HTTPS adoption– users don’t like certificate errors, and asking them to manually “fix” things the browser can fix itself is annoying.

But it’s less awesome for users of other browsers without this accommodation, especially when the site developers don’t know about Chrome’s magic behavior and close the bug as “fixed” because they tested in Chrome. So other browsers have to adopt this magic if they want to be as great as Chrome (no browser vendor likes bugs whining “Your browser doesn’t work but Chrome does!”). Then, after all the browsers have the magic in place, then other tools like curl and wfetch and wget etc need to adopt it. And the magic is now a hack that lives on for decades, increasing the development cost of all future web clients. Blargh.

It’s worth noting that this scenario was especially confusing for users of Microsoft Edge, because its address box has special magic that hides the “www.” prefix, even on error pages. The address shown by default is a “lie”:

Edge hides the “www.” prefix in the address box

Only by putting focus in the address bar can you see the “truth”:

WWW is showing now on focus

When you’re building magic into your software, consider carefully how you do so.

  • Do you make your magic invisible, or obvious?
  • Is there a way to scope it, or will you have to maintain it forever?
  • Are you training users to expect magic, or guiding them away from it?
  • If you’re part of an ecosystem, is your magic in line with your long-term ecosystem goals?

-Eric


Certified Malice

One unfortunate (albeit entirely predictable) consequence of making HTTPS certificates “fast, open, automated, and free” is that both good guys and bad guys alike will take advantage of the offer and obtain HTTPS certificates for their websites.

Today’s bad guys can easily turn a run-of-the-mill phishing spoof:

HTTP Phish screenshot

…into a somewhat more convincing version, by obtaining a free “domain validated” certificate and lighting up the green lock icon in the browser’s address bar:

HTTPS Phish screenshot

Phishing site on Android

The resulting phishing site looks almost identical to the real site:

Real and fake side-by-side

By December 8, 2016, LetsEncrypt had issued 409 certificates containing “Paypal” in the hostname; that number is up to 709 as of this morning. Other targets include BankOfAmerica (14 certificates), Apple, Amazon, American Express, Chase Bank, Microsoft, Google, and many other major brands. LetsEncrypt validates only that (at one point in time) the certificate applicant can publish on the target domain. The CA also grudgingly checks with the SafeBrowsing service to see if the target domain has already been blocked as malicious, although they “disagree” that this should be their responsibility. LetsEncrypt’s short position paper is worth a read; many reasonable people agree with it.

The “race to the bottom” in validation performed by CAs before issuing certificates is what led the IE team to spearhead the development of Extended Validation certificates over a decade ago. The hope was that, by putting the CA’s name “on the line” (literally, the address line), CAs would be incentivized to do a thorough job vetting the identity of a site owner. Alas, my proposal that we prominently display the CA’s name for all types of certificate (EV, OV, DV) wasn’t implemented, so domain-validated certificates are largely anonymous commodities unless a user goes through the cumbersome process of manually inspecting a site’s certificates. For a number of reasons (to be explored in a future post), EV certificates never really took off.

Of course, certificate abuse isn’t limited to LetsEncrypt—other CAs have also issued domain-validated certificates to phishing sites as well:

Comodo cert for a Paypal Phish

Who’s Responsible?

Unfortunately, ownership of this mess is diffuse, and I’ve yet to encounter any sign like this:

image

Blame the Browser

The core proposition held by some (but not all) CAs is that combatting malicious sites is the responsibility of the user-agent (browser), not the certificate authority. It’s an argument with merit, especially in a world where we truly want encryption for all sites, not just the top sites.

That position is bolstered by the fact that some browsers don’t actively check for certificate revocation, so even if LetsEncrypt were to revoke a certificate, the browser wouldn’t even notice.

Another argument is that browsers overpromise the safety of sites by using terms like Secure in the UI—while the browser can know whether a given HTTPS connection is present and free of errors, it has no knowledge of the security of the destination site or CDN, nor its business practices. Internet Explorer’s HTTPS UX used to have a helpful “Should I trust this site?” link, but that content went away at some point. Security wording is a complicated topic because what the user really wants to know (“Is this safe?”) isn’t something a browser can ever really answer in the affirmative. Users tend to be annoyed when you tell them only the truth: “This download was not reported as not safe.”

The obvious way to address malicious sites is via phishing and malware blocklists, and indeed, you can help keep other users safe by reporting any unblocked phish you find to the Safe Browsing service; this service protects Chrome, Firefox, and Safari users. You can also forward phishing messages to scam@netcraft.com and/or PhishTank. Users of Microsoft browsers can report unblocked phish to SmartScreen (in IE, click Tools > SmartScreen > Report Unsafe Website). Known-malicious sites will get the UI treatment they deserve:

image

Unfortunately, there’s always latency in block lists, and a phisher can probably turn a profit with a site that’s live less than one hour. Phishers also have a whole bag of tricks to delay blocks, including cloaking whereby they return an innocuous “Site not found” message when they detect that they’re being loaded by security researchers’ IP addresses, browser types, OS languages, etc.

Blame the Websites

Some argue that websites are at fault, for:

  1. Relying upon passwords and failing to adopt unspoofable two-factor authentication schemes which have existed for decades
  2. Failing to adopt HTTPS or deploy it properly until browsers started bringing out the UI sledgehammers
  3. Constantly changing domain names and login UIs
  4. Emailing users non-secure links to redirector sites
  5. Providing bad security advice to users

Blame the Humans

Finally, many lay blame with the user, arguing user education is the only path forward. I’ve long given up much hope on that front—the best we can hope for is raising enough awareness that some users will contribute feedback into more effective systems like automated phishing block lists.

We’ve had literally decades of sites and “experts” telling users to “Look for the lock!” when deciding whether a site is to be trusted. Even today we have bad advice being advanced by security experts who should know better, like this message from the Twitter security team which suggests that https://twitter.com.access.info is a legitimate site.

Email from Twitter Security Team

Where Do We Go From Here?

Unfortunately, I don’t think there are any silver bullets, but I also think that unsolvable problems are the most interesting ones. I’d argue that everyone who uses or builds the web bears some responsibility for making it safer for all, and each should try to find ways to do that within their own sphere of control.

Following is an unordered set of ideas that I think are worthwhile.

Signals

Historically, we in the world of computers have been most comfortable with binary – is this site malicious, or is it not? Unfortunately, this isn’t how the real world usually works, and fortunately many systems are moving more toward a set of signals. When evaluating the trustworthiness of a site, there are dozens of available signals, including:

  • HTTPS Certificates
  • Age of the site
  • Has the user visited this site before
  • Has the user stored a password on this site before
  • Has the user used this password before, anywhere
  • Number of visitors observed to the site
  • Hosting location of the site
  • Presence of sensitive terms in the content or URL
  • Presence of login forms or executable downloads

None of these signals is individually sufficient to determine whether a site is good or evil, but combined together they can be used to feed systems that make good sites look safer and evil sites more suspect. Suspicious sites can be escalated for human analysis, either by security experts or even by crowd-sourced directed questioning of ordinary users. For instance, see the heuristic-triggered Is This Phish? UI from Internet Explorer’s SmartScreen system:

image

The user’s response to the prompt is itself a signal that feeds into the system, and allows more speedy resolution of the site as good or bad and subject to blocking.
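As a rough sketch of how such a scoring-and-escalation pipeline could be wired together (the signals, weights, and thresholds below are invented for illustration; no real browser works exactly this way):

# Invented weights: positive values add suspicion, negative values reduce it.
SIGNAL_WEIGHTS = {
    "valid_https": -1.0,           # present, error-free HTTPS
    "site_is_new": 2.0,            # domain registered very recently
    "first_visit": 1.0,            # user has never been here before
    "reused_password_typed": 3.0,  # user entered a password they use elsewhere
    "sensitive_terms": 2.0,        # e.g. "paypal", "verify", "login" in URL or content
    "has_login_form": 1.5,
}

def suspicion_score(observations):
    """Sum the weights of the signals that fired for this page load."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items() if observations.get(name))

def triage(observations):
    score = suspicion_score(observations)
    if score >= 6:
        return "warn-or-block"
    if score >= 3:
        return "escalate"  # e.g. ask the user "Is this phish?" and feed the answer back in
    return "allow"

print(triage({"site_is_new": True, "first_visit": True,
              "sensitive_terms": True, "has_login_form": True}))  # warn-or-block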

One challenge with systems based on signals is that, while they grow much more powerful as more endpoints report signals, those signal reports may have privacy implications for individual users.

Reputation

In the real world, many things are based on reputation—without a good reputation, it’s hard to get a job, stay in business, or even find a partner.

In browsers, we have a well-established concept of bad reputation (your site or download appears on a block list) but we’ve largely pushed back against the notion of good reputation. In my mind, this is a critical failing—while much of the objection is well-meaning (“We want a level playing field for everyone”), it’s extremely frustrating that we punish users in support of abstract ideals. Extended Validation certificates were one attempt at creating the notion of good reputation, but they suffered from the whims of CAs’ business plans and the legal system that drove absurdities like:

Security chip showing full legal name of Washington Post's holding company

While there are merits to the fact that, on the web, no one can tell whether your site is a one-woman startup or a Fortune 100 behemoth, there are definite downsides as well.

Owen Campbell-Moore wrote a wonderful paper, Rethinking URL bars as primary UI, in which he proposes a hypothetical origin-to-local-brand mapping, which would allow browsers to help users more easily understand where they really are when interacting with top sites. I think it’s a long-overdue idea, brilliant in its obviousness. When it eventually gets built (by Apple, Microsoft, Google, or some upstart?), we’ll wonder what took us so long. A key tradeoff in making such a system practical is that designers must necessarily focus on the “head” of the web (say, the most popular 1000 websites globally); trying to generalize the system for a perfectly level playing field isn’t practical, just like in the real world.

Assorted

  • Browsers could again make a run at supporting unspoofable authentication methods natively.
  • Browsers could do more to take authentication out from the Zone of Death, limiting spoofing attacks.
  • New features like Must-Staple will allow browsers to respond to certificate revocation information without a performance penalty.
  • Automated CAs could deploy heuristics that require additional validation for certificates containing often-spoofed domains (a rough sketch follows this list). For instance, if I want a certificate for paypal-payments.com, they could either demand additional information about me personally (allowing a visit from Law Enforcement if I turn out to be a phisher), or they could simply issue a certificate whose validity period starts a week in the future. The certificate would be recorded in Certificate Transparency logs immediately, allowing brand monitoring firms to take note and respond immediately if phishing is detected when the certificate becomes valid. Such systems will be imperfect (you need to handle BankOfTheVVest.com as a potential spoof of BankOfTheWest.com, for instance), but even targeting a few high-value domains could make a major dent.
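Here is the kind of pre-issuance heuristic described above, as an illustrative Python sketch (the brand list and look-alike normalization are invented; a real CA would need something far more robust):

import re

# Hypothetical list of high-value, frequently spoofed brands.
WATCHED_BRANDS = ["paypal", "bankofamerica", "bankofthewest", "apple", "amazon", "chase"]

def normalize(hostname):
    """Collapse a few cheap look-alike tricks before matching, e.g. 'vv' -> 'w', '0' -> 'o'."""
    h = hostname.lower()
    h = h.replace("vv", "w").replace("0", "o").replace("1", "l")
    return re.sub(r"[^a-z]", "", h)

def needs_extra_vetting(requested_hostname):
    """True if this request should get extra validation, a delayed validity start,
    and/or a heads-up to brand-monitoring firms watching the CT logs."""
    flattened = normalize(requested_hostname)
    return any(brand in flattened for brand in WATCHED_BRANDS)

print(needs_extra_vetting("paypal-payments.com"))  # True
print(needs_extra_vetting("bankofthevvest.com"))   # True -- 'vv' folds to 'w'
print(needs_extra_vetting("example.com"))          # False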

 

Fight the phish!

 

-Eric

Update: Chris wrote a great post on HTTPS UX in Chrome.


The Line of Death

When building applications that display untrusted content, security designers have a major problem: if an attacker has full control of a block of pixels, he can make those pixels look like anything he wants, including the UI of the application itself. He can then induce the user to undertake an unsafe action, and the user will be none the wiser.

In web browsers, the browser itself usually fully controls the top of the window, while pixels under the top are under control of the site. I’ve recently heard this called the line of death:

Line of death below omnibox

If a user trusts pixels above the line of death, the thinking goes, they’ll be safe, but if they can be convinced to trust the pixels below the line, they’re gonna die.

Unfortunately, this crucial demarcation isn’t explicitly pointed out to the user, and even more unfortunately, it’s not an absolute.

For instance, because the area above the LoD is so small, sometimes more space is needed to display trusted UI. Chrome attempts to resolve this by showing a little chevron that crosses the LoD:

Chrome chevrons

…because untrusted markup cannot cross the LoD. Unfortunately, as you can see in the screenshot, the treatment is inconsistent; in the PageInfo flyout, the chevron points to the bottom of the lock and the PageInfo box overlaps the LoD, while in the Permission flyout the chevron points to the bottom of the omnibox and the Permission box only abuts the LoD. Sometimes, the chevron is omitted, as in the case of Authentication dialogs.

Alas, the chevron is subtle, and I expect most users will fall for a faked chevron, like the one some sites have started to use[1]:

Fake chevron in HTML

The bigger problem is that some attacker data is allowed above the LoD; while trusting the content below the LoD will kill your security, there are also areas of death above the line. A more accurate Zones of Death map might look like this:

Zones of Death

In Zone 1, the attacker’s chosen icon and page title are shown. This information is controlled fully by the attacker and thus may consist entirely of deceptive content and lies.

In Zone 2, the attacker’s domain name is shown. Some information security pros will argue that this is the only “trustworthy” component of the URL, insofar as if the URL is HTTPS then the domain correctly identifies the site to which you’re connected. Unfortunately, your idea of trustworthy might be different than the experts’; https://paypal-account.com/ may really be the domain you loaded, but it has no relationship with the legitimate payment service found at https://paypal.com.

The path component of the URL in Zone 3 is fully untrustworthy; the URL http://account-update.com/paypal.com/ has nothing to do with Paypal either, and while spoofing here is less convincing, it also may be harder for the good guys to block because the spoofing content is not found in DNS nor does it create any records in Certificate Transparency logs.
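A quick way to see the difference between Zones 2 and 3 is to actually split those example URLs (illustrative Python):

from urllib.parse import urlsplit

for url in ("https://paypal-account.com/", "http://account-update.com/paypal.com/"):
    parts = urlsplit(url)
    print(parts.hostname, "|", parts.path)

# paypal-account.com | /            <- Zone 2: really the connected domain, just not Paypal's
# account-update.com | /paypal.com/ <- Zone 3: "paypal.com" is only a path the attacker chose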

Zone 4 is the web content area. Nothing in this area is to be believed. Unfortunately, on windowed operating systems, this is worse than it sounds, because it creates the possibility of picture-in-picture attacks, where an entire browser window, including its trusted pixels, can be faked:

Paypal window is fake content from evil.example.com

When hearing of picture-in-picture attacks, many people immediately brainstorm defenses; many related to personalization. For instance, if you run your OS or browser with a custom theme, the thinking goes, you won’t be fooled. Unfortunately, there’s evidence that that just isn’t the case.

Story time

Back in 2007 as the IE team was launching Extended Validation (EV) certificates, Microsoft Research was publishing a paper calling into question their effectiveness. A Fortune 500 financial company came to visit the IE team as they evaluated whether they wanted to go into the EV Certificate Authority business. They were excited about the prospect (as were we, since they were a well-known-name with natural synergies) but they noted that they thought the picture-in-picture problem was a fatal flaw.

I was defensive– “It’s interesting,” I conceded, “but I don’t think it’s a very plausible attack.”

They retorted, “Well, we passed this screenshot around our entire information security department, and nobody could tell it’s a picture-in-picture attack. Can you?” They slid an 8.5×11 color print across the table.

“Of course!” I said, immediately relieved. I quickly grew gravely depressed as I realized the implications of the fact that they couldn’t tell the difference.

“How?” they demanded.

“It’s a picture of an IE7 browser running on Windows Vista in the transparent Aero Glass theme with a page containing a JPEG of an IE7 browser running on Windows XP in the Luna aka Fisher Price theme?” I pointed out.

“Oh. Huh.” they noted.

My thoughts of using browser personalization as an effective mitigation died that day.

Other mitigations were proposed; one CA built an extension where hovering over the EV Lock Icon (“Trust Badge”) would dim the entire screen except for the badge. One team proposed using image analysis to scan the current webpage for anything that looked like a fake EV badge.

Personally, my favorite approach was Tyler Close’s idea that the browser should use PetNames for site identity– think of them as a Gravatar icon for salted certificate hashes– not only would they make every HTTPS site’s identity look unique to each user, but this could also be used as a means of detecting fraudulent or misissued certificates (in a world before we had certificate transparency).

The Future is Here … and It’s Worse

HTML5 adds a Fullscreen API, which means the Zone of Death now looks like this:

Zone of Death in fullscreen mode

The Metro/Immersive/Modern mode of Internet Explorer in Windows 8 suffered from the same problem; because it was designed with a philosophy of “content over chrome”, there were no reliable trustworthy pixels. I begged for a persistent trustbadge to adorn the bottom-right of the screen (showing a security origin and a lock) but was overruled. One enterprising security tester in Windows made a visually-perfect spoofing site of Paypal, where even the user gestures that displayed the ephemeral browser UI were intercepted and fake indicators were shown. It was terrifying stuff, mitigated only by the hope that no one would use the new mode.

Virtually all mobile operating systems suffer from the same issue– due to UI space constraints, there are no trustworthy pixels, allowing any application to spoof another application or the operating system itself. Historically, some operating systems have attempted to mitigate the problem by introducing a secure user gesture (on Windows, it’s Ctrl+Alt+Delete) that always shows trusted UI, but such measures tend to confuse users (limiting their effectiveness) and often get “optimized away” when the UX team’s designers get ahold of the product.

It will be interesting to see how WebVR tries to address this problem on an even grander scale.

Beyond Browsers

Of course, other applications have the concept of a LoD as well, including web applications. Unfortunately, they often get this wrong. Consider Outlook.com’s rendering of an email:

image

When Outlook receives an email from a trusted sender, it notifies the user via a “This message is from a trusted sender.” notice, which appears directly inside a Zone of Death:

image

Security UI is hard.

-Eric

[1] “Why would they fake a permission prompt? What would they gain?” you ask? Because for a real permission prompt, if you click Block, they can never ask you again, while with a fake prompt they can spam you as much as they like. On the other hand, if you click Allow, they immediately present the real prompt.


Client Certificates on Android

Recently, this interesting tidbit crossed my Twitter feed:

Tweet: "Your site asks for a client certificate?"

Sure enough, if you visited the site in Chrome, you’d get a baffling prompt.

My hometown newspaper shows the same thing:

No Certificates Found prompt on Android

Weird, huh?

Client certificates are a way for a browser to supply a certificate to the server to verify the client’s identity (in the typical case, an HTTPS server only sends its own certificate, so that the client can validate that the server is what it claims to be).

In the bad old days of IE6 and IE7, the default behavior was to show a similar prompt, but what’s going on with the latest Chrome on modern Android?

It turns out that this is a limitation of the Android security model, to which Chrome on Android is subject. In order for Chrome to interact with the system’s certificate store, the operating system itself shows a security prompt.

If your server has been configured to accept client certificates (in either require or want mode), you should be sure to test it on Android devices to verify that it behaves as you expect for your visitors (most of whom likely will not have any client certificates to supply).
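If you want to reproduce the prompt in a test environment, the sketch below (Python’s ssl module; the certificate file names and port are placeholders) stands up a TLS listener that requests, but does not require, a client certificate, which is enough to trigger the certificate picker on Android:

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server-cert.pem", "server-key.pem")  # placeholder file names
ctx.verify_mode = ssl.CERT_OPTIONAL          # "want" mode: request a client cert, don't require one
ctx.load_verify_locations("client-ca.pem")   # CA(s) whose client certificates we would accept

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # TLS handshake (and the client-cert request) happens here
        print("Client certificate:", conn.getpeercert() or "none supplied")
        conn.close()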

-Eric


HTTPS Only Works If You Use It – Tipster Edition

Convoy with three armored tanks and one pickup truck

It’s recently become fashionable for news organizations to build “anonymous tip” sites that permit members of the public to confidentially submit tips about stories of public interest.

Unfortunately, would-be tipsters need to take great care when exploring such options, because many organizations aren’t using HTTPS properly to ensure that the user’s traffic to the news site is protected from snoopers on the network.

If the organization uses any non-secure redirections in loading its “Tips” page, or the page pulls any unique images or other content over a non-secure connection, the fact that you’ve visited the “Tips” page will be plainly visible to your ISP, employer, fellow coffee shop patron, home-router-pwning group, etc.

NYTimes call for Tips, showing non-secure redirects

The New Yorker Magazine call for Tips, showing non-secure redirects
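You can check a tips page for this kind of leak yourself. Here is a minimal sketch, assuming Python and the third-party requests library (the URL is a placeholder):

import requests  # assumption: `pip install requests`

def insecure_hops(url):
    """Follow redirects and return any hop that was fetched over plain HTTP."""
    resp = requests.get(url, allow_redirects=True, timeout=10)
    chain = [r.url for r in resp.history] + [resp.url]
    return [u for u in chain if u.startswith("http://")]

# Any non-empty result means a network observer can see that the tips page was visited.
print(insecure_hops("https://example.com/tips"))

Note that this only checks the redirect chain; images, scripts, and stylesheets pulled over HTTP need a separate check (the browser’s mixed-content warnings in the developer tools are an easy way to spot those).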

Here are a few best practices for organizations that either a) accept anonymous tips online or b) use webpages to tell would-be leakers how to send anonymous tips via Tor or non-electronic means:

  • Serve the “Tips” page, along with every redirect and every resource involved in loading it, exclusively over HTTPS, so a network observer can’t tell that it was visited.

For end users:

  • Consider using Tor or other privacy-aiding software.
  • Don’t use a work PC or any PC that may have spyware or non-public certificate roots installed.

Stay private out there!

-Eric
