Google Internet Authority G3

For some time now, operating behind the scenes and going mostly unnoticed, Google has been changing the infrastructure used to provide HTTPS certificates for its sites and services.

You’ll note that I said mostly. Over the last few months, I’ve periodically encountered complaints from users who try to load a Google site and get an unexpected error page:

[Screenshot: Chrome certificate error page]

Now, a variety of different problems can cause errors like this one. In most cases, the user has some software installed locally (security software or malware) that generates fake certificates, which are then deemed invalid for various reasons.

However, when following troubleshooting steps, we’ve determined that a small number of users encountering this NET::ERR_CERT_AUTHORITY_INVALID error page are hitting it for the correct and valid Google certificates that chain through Google’s new intermediate Google Internet Authority G3. That’s weird.

What’s going on?

The first thing to understand is that Google operates a number of different certificate trust chains, with multiple chains deployed at the same time. So a given user will likely encounter some certificate chains that go through the older Google Internet Authority G2 chain and some that go through the newer Google Internet Authority G3 chain– this isn’t something the client controls.

[Diagram: certificate chains through the G2 and G3 intermediates]

You can visit this GIA G3-specific test page to see if the G3 root is properly trusted by your system.

More surprisingly, it’s also the case that you might be getting a G3 chain for a particular Google site (e.g. https://mail.google.com) while some other user is getting a G2 chain for the same URL. You might even end up with a different chain simply based on what Google sites you’ve visited first, due to a feature called HTTP/2 connection coalescing.
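
If you’re curious which chain a given connection hands you, here’s a small illustrative sketch (not an official diagnostic) using only Python 3’s standard library; it assumes your OS trust store can validate the chain:

# Print the issuer of the leaf certificate a server presents on one
# connection. If the chain isn't trusted locally, wrap_socket raises
# SSLCertVerificationError -- which itself tells you something.
import socket
import ssl

def leaf_issuer(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # getpeercert() returns the issuer as a tuple of RDN tuples.
            issuer = tls.getpeercert()["issuer"]
            return ", ".join("%s=%s" % kv for rdn in issuer for kv in rdn)

print(leaf_issuer("mail.google.com"))

Running it several times, or against different Google hostnames, may show different issuers, for the reasons described above.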

In order to see the raw details of the certificate encountered on an error page, you can click the error code text on the blocking page. (If the site loaded without errors, you can view the certificate like so).

Google’s new certificate chain is certainly supposed to be trusted automatically– if your operating system (e.g. Windows 7) didn’t already have the proper certificates installed, it’s expected to automatically download the root certificate from the update servers (e.g. Microsoft WindowsUpdate) and install it so that the certificate chain is recognized as trusted. In rare instances, we’ve heard of this process not working– for instance, some network administrators have disabled root certificate updates for their enterprise’s PCs.

On modern versions of Windows, you can direct Windows to check its trusted certificate list against the WindowsUpdate servers by running the following from a command prompt:

certutil -f -verifyCTL AuthRootWU

Older versions of Windows might not support the -verifyCTL command. You might instead try downloading the R2 GlobalSign Root Certificate directly and then installing it in your Trusted Root Certification Authorities:

[Screenshots: certificate install wizard – Install button, Local Machine store, Trusted Root Certification Authorities, Finish]
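
To double-check what’s actually in the store, you can ask Windows directly. Here’s a rough Windows-only sketch (assuming Python 3 plus the cryptography package; ssl.enum_certificates reads the Windows “ROOT” system store) that lists any GlobalSign roots it finds:

# List GlobalSign roots visible in the Windows "ROOT" store.
# Requires Python 3 on Windows and `pip install cryptography`.
import ssl
from cryptography import x509

for der, encoding, trust in ssl.enum_certificates("ROOT"):
    if encoding != "x509_asn":
        continue  # skip PKCS#7 blobs
    cert = x509.load_der_x509_certificate(der)
    subject = cert.subject.rfc4514_string()
    if "GlobalSign" in subject:
        print(subject)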

Overall, the number of users reporting problems here is very low, but I’m determined to help ensure that Chrome and Google sites work for everyone.

-Eric


Chrome 59 on Mac and TeletexString Fields

Update: This change ended up getting backed out, after it was discovered that it impacted smartcard authentication. Thanks for self-hosting Chrome Dev builds, IT teams!

A change quietly went into Chrome 59 that may impact your certificates if they contain non-ASCII characters in a TeletexString field. Specifically, these certificates will fail to validate on Mac, resulting in either an ERR_SSL_SERVER_CERT_BAD_FORMAT error for server certificates or an ERR_BAD_SSL_CLIENT_AUTH_CERT error for client certificates. The change that rejects such certificates is presently only in the Mac version of Chrome, but it will eventually make its way to other platforms.

You can check whether your certificates contain teletexStrings with an ASN.1 decoder program, like this one. Simply upload the .CER file and look for the TeletexString type in the output. If you find any such fields that contain non-ASCII characters, the certificate is impacted:

[Screenshot: a non-ASCII character in a TeletexString field]
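
If you’d rather check from a script, here’s a rough sketch assuming Python 3 with the pyasn1 package and a DER-encoded cert.der. (Without a schema, tagged fields such as extensions may decode opaquely, so this scans the readily decodable parts, like the Subject and Issuer names, where teletexStrings usually live.)

# Flag TeletexString values that contain non-ASCII characters.
# Requires `pip install pyasn1` and a DER-encoded certificate in cert.der.
from pyasn1.codec.der import decoder
from pyasn1.type import char, univ

def walk(node, path="cert"):
    if isinstance(node, char.TeletexString):
        text = str(node)
        if any(ord(ch) > 0x7F for ch in text):
            print("%s: non-ASCII TeletexString: %r" % (path, text))
    elif isinstance(node, (univ.Sequence, univ.SequenceOf, univ.Set, univ.SetOf)):
        for i in range(len(node)):
            walk(node[i], "%s[%d]" % (path, i))

with open("cert.der", "rb") as f:
    decoded, _ = decoder.decode(f.read())
walk(decoded)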

Background: Certificates are encoded using a general-purpose data encoding scheme called ASN.1. ASN.1 specifies encoding rules, and strings may be encoded using any of a number of different data types (teletexString, printableString, universalString, utf8String, bmpString). Due to the complexity and underspecified nature of the TeletexString, as well as the old practice of shoving Latin1 strings in fields marked as TeletexString, the Chrome change takes a conservative approach to handling TeletexString, only allowing the ASCII subset. utf8String is a well-specified and well-supported standard and should be used in place of the obsolete teletexString type.

To correct the problem with the certificate, regenerate it using UTF8String fields to store non-ASCII data.

-Eric Lawrence


Inspecting Certificates in Chrome

With a check-in on Monday night, Chrome Canary build 60.0.3088 regained a quick path to view certificates from the top-level security UI. When the new feature is enabled, you can just click the lock icon to the left of the address box, then click the “Valid” link in the new Certificate section of the Page Information bubble to see the certificate:

[Screenshot: Chrome 60 Page Info dropdown showing the Certificate section]

In some cases, you might only be interested in learning which Certificate Authority issued the site’s certificate. If the connection security is Valid, simply hover over the link to see the issuer information in a tooltip:

[Screenshot: tooltip showing the issuer CA]

The new link is also available on the blocking error page in the event of an HTTPS error, although no tooltip is shown:

[Screenshot: the link is also available on the blocking certificate error page]

Note: For now, you must manually enable the new Certificate section. Type chrome://flags/#show-cert-link in Chrome’s address box and hit enter. Click the Enable link and relaunch Chrome.

[Screenshot: the show-cert-link entry in chrome://flags]

This section is enabled by default in Chrome 63 along with some other work to simplify the Page Information bubble.

If you want more information about the HTTPS connection, or to see the certificates of the resources used in the page, hit F12 to open the Developer Tools and click the Security tab:

[Screenshot: the Chrome DevTools Security tab showing more information]

You can learn more about Chrome’s certificate UIs and philosophy in this post from Chrome Security’s Chris Palmer.

-Eric Lawrence


Get Help with HTTPS problems

Sometimes, when you try to load an HTTPS address in Chrome, instead of the expected page you get a scary warning, like this one:

[Screenshot: certificate error warning page]

Chrome has found a problem with the security of the connection and has blocked loading the page to protect your information.

In a lot of cases, if you’re just surfing around, the easiest thing to do is find a different page to visit. But what if the error appears on an important site that you really need to reach? You shouldn’t just “click through” the error, because doing so could put your device or information at risk.

In some cases, clicking the ADVANCED link might explain more about the problem. For instance, in this example, the error message says that the site is sending the wrong certificate; you might try finding a different link to the site using your favorite search engine.

[Screenshot: ADVANCED details explaining that the site is sending the wrong certificate]

Or, in this case, Chrome explains that the certificate has expired, and asks you to verify that your computer clock’s Date and Time are set correctly:

[Screenshot: expired-certificate error suggesting a clock check]

You can see the specific error code in the middle of the text:

[Screenshot: the error code shown within the page text]

Some types of errors are a bit more confusing. For instance, NET::ERR_CERT_AUTHORITY_INVALID means that the site’s certificate didn’t come from a company that your computer is configured to trust.

[Screenshot: NET::ERR_CERT_AUTHORITY_INVALID error page]

Errors Everywhere?

What happens if you start encountering errors like this on every HTTPS page that you visit, even major sites like https://google.com?

This usually means that some software on your device or network is interfering with your secure connections. Sometimes this software is well-meaning (e.g. anti-virus software, ad-blockers, parental control filters), and sometimes it’s malicious (adware, malware, etc.). But even buggy well-meaning software can break your secure connections.

If you know what software is intercepting your traffic (e.g. your antivirus) consider updating it or contacting the vendor.

Getting Help

If you don’t know what to do, you may be able to get help in the Chrome Help Forum. When you ask for help, please include the following information:

  • The error code (e.g. NET::ERR_CERT_AUTHORITY_INVALID).
    • To help the right people find your issue, consider adding this to the title of your posting.
  • What version of Chrome you’re using. Visit chrome://version in your browser to see the version number.
  • The type of device and network (e.g. “I’m using a laptop on wifi on my school’s network.”)
  • The error diagnostic information.

You can get diagnostic information by clicking or tapping directly on the text of the error code. When you do so, a bunch of new text will appear in the page:

[Screenshot: diagnostic text revealed in the page]

You should select all of the text:

[Screenshot: all of the diagnostic text selected]

…then hit CTRL+C (or Command ⌘+C on Mac) to copy the text to your clipboard. You can then paste the text into your post. The “PEM encoded chain” information will allow engineers to see exactly what certificate the server sent to your computer, which might shed light on what specifically is interfering with your secure connections.
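
For the curious, here’s a sketch of the kind of triage a helper can do with that pasted text. It assumes Python 3 with the cryptography package and the diagnostic text saved as chain.pem (a hypothetical filename):

# Summarize each certificate in the pasted "PEM encoded chain" text.
import re
from cryptography import x509

pem = open("chain.pem").read()
blocks = re.findall(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", pem, re.S)
for i, block in enumerate(blocks):
    cert = x509.load_pem_x509_certificate(block.encode())
    print("#%d subject: %s" % (i, cert.subject.rfc4514_string()))
    print("    issuer:  %s" % cert.issuer.rfc4514_string())
    print("    expires: %s" % cert.not_valid_after)  # naive UTC datetime

An interception product’s fake certificate typically shows up immediately here, because the issuer name won’t match any real Certificate Authority.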

With any luck, we’ll be able to help you figure out how to surf securely again in no time!

 

-Eric


Chrome Deprecates Subject CN Matching

If you’re using a Self-Signed certificate for your HTTPS server, a deprecation coming to Chrome may affect your workflow.

Chrome 58 will require that certificates specify the hostname(s) to which they apply in the SubjectAltName field; values in the Subject field will be ignored. This follows a similar change in Firefox 48. If impacted, you’ll see something like this blocking page as you load your HTTPS site:

[Screenshot: NET::ERR_CERT_COMMON_NAME_INVALID blocking page in Chrome]

NET::ERR_CERT_COMMON_NAME_INVALID is an unfortunate error code, insofar as all common names are now ignored. Chrome is working to improve the debuggability of this experience.

Update: Both improvements have landed. Chrome now shows [missing_subjectAltName] in the details on the Certificate Error page, and a Subject Alternative Name Missing warning in the Security panel of the Developer Tools.

Notably, Windows’ ancient makecert.exe utility cannot set the SubjectAltName field in certificates, which means that if you’re using it to generate your self-signed certificates, you need to stop. Instead, users of modern Windows can use the New-SelfSignedCertificate command in PowerShell.

New-SelfSignedCertificate -DnsName "www.example.com", "example.com" -CertStoreLocation "cert:\CurrentUser\My"

Using openssl for self-signed certificate generation? See https://stackoverflow.com/a/27931596.
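
If you script certificate generation in Python, here’s a minimal sketch using the cryptography package (an illustration with example.com hostnames, not an officially recommended recipe) that puts the hostnames into SubjectAltName:

# Mint a self-signed certificate that carries its hostnames in
# subjectAltName -- the field Chrome 58+ actually reads.
# Requires `pip install cryptography`.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName(
            [x509.DNSName("www.example.com"), x509.DNSName("example.com")]
        ),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
open("cert.pem", "wb").write(cert.public_bytes(serialization.Encoding.PEM))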

This new restriction may also impact users of very old versions of Fiddler (or FiddlerCore), or users who have configured Fiddler to use MakeCert for whatever reason. Fortunately, Fiddler offers a number of different certificate generators, so you just need to make a small configuration change. To switch away from MakeCert, click Tools > Fiddler Options > HTTPS and click the “Certificates generated by MakeCert engine” link. Change the dropdown to CertEnroll and click OK. Click Actions > Reset All Certificates and restart Fiddler.

[Screenshot: Fiddler’s HTTPS options showing the certificate generator setting]

If you’re building an application atop FiddlerCore, you’ll need to make sure you’re not using makecert; see the end of this post for help.

-Eric Lawrence

PS: There’s also an EnableCommonNameFallbackForLocalAnchors policy. You shouldn’t use it; just fix your certificates, or they’ll break when the policy is removed in Chrome 65 or earlier.


The Trouble with Magic

“Magic” is great… except when it isn’t.

Software design is largely about tradeoffs, and one of the more interesting tradeoffs is between user experience and predictability. This has come up repeatedly throughout my career, and it came up in two independent contexts yesterday, both of which I’ll describe in this post.

Developer Magic

I’m working on a tiny UX change to Google Chrome to deemphasize the data component of data: URIs.

Chrome is a multi-platform browser that runs on Windows, Mac, Linux, ChromeOS, Android, and iOS, which means that I need to make the same change in a number of places. Four, to be precise: Views (our cross-platform UI that runs on Windows, Linux and ChromeOS), Cocoa (Mac), Bling (iOS) and Clank (Android). The change for Views was straightforward and I did it first; porting the change to Mac wasn’t too hard. With the Mac change in hand, I figured that the iOS change would be simple, as both are written in Objective C++. I don’t have a local Mac development box, so I have to upload my iOS changes to the Chromium build bots to see if they work. Unfortunately, my iOS build failed, with a complaint from the linker:

Undefined symbols for architecture arm7:
  "gfx::Range::ToNSRange() const", referenced from:
      OmniboxViewIOS::SetEmphasis(bool, gfx::Range) in omnibox_view_ios.o
      OmniboxViewIOS::UpdateSchemeEmphasis(gfx::Range) in omnibox_view_ios.o
ld: symbol(s) not found for architecture arm7

Hrm… that’s weird; the Mac build worked and the iOS build used the same APIs. Let’s go have a look at the definition of ToNSRange():

[Screenshot: ToNSRange() declared inside an if defined(OS_MACOSX) block]

Oh, weird. It’s in an OS_MACOSX block, so I guess it’s not there for iOS. But how did it compile and only fail at linking?

Turns out that the first bit of “magic” is that when OS_IOS is defined, OS_MACOSX is always also defined:

[Screenshot: when OS_IOS is defined, OS_MACOSX is also defined]

I was relieved to learn that I’m not the only person who didn’t know this, both by asking around and by finding code blocks like this:

[Screenshot: code checking defined(win) || defined(mac) || defined(ios)]

Okay, so that’s why it compiled, but why didn’t it link? Let’s look at the build configuration file:

[Screenshot: the build configuration file listing range_mac.mm and range_win.cc]

Hmmm… That’s a bit suspicious. There’s range_mac.mm and range_win.cc both listed within a single target. But it seems unlikely that the Mac build includes the Windows code, or that the Windows build includes the Mac code. Which suggests that maybe there’s some magic not shown in the build configuration that determines what actually gets built. And indeed, it turns out that such magic does exist.

The overall Build Configuration introduces its own incompatible magic, whereby filenames suffixed with _mac only compile on Mac… and that is limited to actual Mac, not including iOS:

[Screenshot: sources_assignment_filters in the build configuration]

This meant that the iOS compilation had a header file with no matching implementation, and I was the first lucky guy to stumble upon this by calling the missing code.

Magic handling of filenames is simultaneously great (“So convenient”) and awful—I spoke to a number of engineers who knew that the build does this, but had no idea how, or whether or not iOS builds would include _mac-suffixed files. My instinct for fixing this would be to rename range_mac.mm to just range_apple.mm (because .mm files are compiled only for Mac and iOS), but instead I’ve been told that the right fix is to just temporarily disable the magic:

[Screenshot: adding an exclusion via set_sources_assignment_filter]

Talking to some of the experts, I learned that the long term goal is to get rid of the sources_assignment_filters altogether (No more magic!) but doing so entails a bunch of boring work (No more magic!).

Magic is great, when it works.

When it doesn’t, I spend a lot of time investigating and writing blog posts. In this case, I ended up flailing about for a few hours (because sending my various fix attempts off to the bots isn’t fast) trying to figure out what was going on.

There’s plenty of other magic that happens throughout the Chromium developer toolchain; some of it visible and some of it invisible. For instance, consider what happens when I forget the name of the command that finds out what release a changelist went into:

[Screenshot: typo’d git find-release command; Git suggests the correct name]

Git “magically” knows what I meant, and points out my mistake.

Elsewhere, however, Chromium’s git “magically” knows what I meant and just does it:

[Screenshot: typo’d git cl upalod command, which Chromium’s git tooling runs anyway]

Which approach is better? I suppose it depends. The code that suggests proper commands is irritating (“Dammit, if you knew what I meant, you could just do it!”) but it’s also predictable—only legal commands run and typos cannot go overlooked and propagate throughout scripts, documentation, etc.

This same type of tradeoff appeared in a different scenario by the end of the day.

End-User Magic

This repro won’t work forever, but try clicking this link: https://www.kubernetes.io. If you do this right now, you’ll find that the page works great in Chrome, but doesn’t work in IE or Edge:

[Screenshot: Edge and IE show a certificate error]

If you pop the site into SSLLabs’ server test, you can see that the server indeed has a problem:

[Screenshot: SSLLabs reports a certificate subject name mismatch]

The certificate’s SubjectAltName field contains kubernetes.io, but not www.kubernetes.io.

So, what gives? Why does the original www URL work in Chrome? If you open the Developer Tools console while following the link, you’ll see the following explanation of the magic:

[Screenshot: console warning about the automagic redirection]

Basically, Chrome saw that the certificate for www.kubernetes.io was misconfigured and recognized that sending the user to the bare domain kubernetes.io was probably the right thing to do. So, it just did that, which is great for the user. Right? Right??

Well, yes, it’s great for Chrome users, and maybe for HTTPS adoption– users don’t like certificate errors, and asking them to manually “fix” things the browser can fix itself is annoying.

But it’s less awesome for users of other browsers without this accommodation, especially when the site developers don’t know about Chrome’s magic behavior and close the bug as “fixed” because they tested in Chrome. So other browsers have to adopt this magic if they want to be as great as Chrome (no browser vendor likes bugs whining “Your browser doesn’t work but Chrome does!”). Then, after all the browsers have the magic in place, other tools like curl and wfetch and wget etc. need to adopt it too. And the magic is now a hack that lives on for decades, increasing the development cost of all future web clients. Blargh.

It’s worth noting that this scenario was especially confusing for users of Microsoft Edge, because its address box has special magic that hides the “www.” prefix, even on error pages. The default address is a “lie”:

[Screenshot: Edge hides the “www.” prefix in the address box]

Only by putting focus in the address bar can you see the “truth”:

[Screenshot: the “www.” appears once the address bar has focus]

When you’re building magic into your software, consider carefully how you do so.

  • Do you make your magic invisible, or obvious?
  • Is there a way to scope it, or will you have to maintain it forever?
  • Are you training users to expect magic, or guiding them away from it?
  • If you’re part of an ecosystem, is your magic in line with your long-term ecosystem goals?

-Eric


Certified Malice

One unfortunate (albeit entirely predictable) consequence of making HTTPS certificates “fast, open, automated, and free” is that both good guys and bad guys alike will take advantage of the offer and obtain HTTPS certificates for their websites.

Today’s bad guys can easily turn a run-of-the-mill phishing spoof:

[Screenshot: HTTP phishing page]

…into a somewhat more convincing version, by obtaining a free “domain validated” certificate and lighting up the green lock icon in the browser’s address bar:

[Screenshots: the same phish over HTTPS, and a phishing site on Android]

The resulting phishing site looks almost identical to the real site:

[Screenshot: real and fake sites side-by-side]

By December 8, 2016, LetsEncrypt had issued 409 certificates containing “Paypal” in the hostname; that number is up to 709 as of this morning. Other targets include BankOfAmerica (14 certificates), Apple, Amazon, American Express, Chase Bank, Microsoft, Google, and many other major brands. LetsEncrypt validates only that (at one point in time) the certificate applicant can publish on the target domain. The CA also grudgingly checks with the SafeBrowsing service to see if the target domain has already been blocked as malicious, although they “disagree” that this should be their responsibility. LetsEncrypt’s short position paper is worth a read; many reasonable people agree with it.

The “race to the bottom” in validation performed by CAs before issuing certificates is what led the IE team to spearhead the development of Extended Validation certificates over a decade ago. The hope was that, by putting the CA’s name “on the line” (literally, the address line), CAs would be incentivized to do a thorough job vetting the identity of a site owner. Alas, my proposal that we prominently display the CA’s name for all types of certificate (EV, OV, DV) wasn’t implemented, so domain-validated certificates are largely anonymous commodities unless a user goes through the cumbersome process of manually inspecting a site’s certificates. For a number of reasons (to be explored in a future post), EV certificates never really took off.

Of course, certificate abuse isn’t limited to LetsEncrypt; other CAs have also issued domain-validated certificates to phishing sites:

[Screenshot: Comodo certificate issued for a PayPal phish]

Who’s Responsible?

Unfortunately, ownership of this mess is diffuse, and I’ve yet to encounter any sign like this:

[Image: a sign claiming responsibility]

Blame the Browser

The core proposition held by some (but not all) CAs is that combatting malicious sites is the responsibility of the user-agent (browser), not the certificate authority. It’s an argument with merit, especially in a world where we truly want encryption for all sites, not just the top sites.

That position is bolstered by the fact that some browsers don’t actively check for certificate revocation, so even if LetsEncrypt were to revoke a certificate, the browser wouldn’t even notice.
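
To illustrate what an active check involves (browsers that do check typically use OCSP or stapled responses), here’s a rough sketch of querying a certificate’s OCSP responder, assuming Python 3 with the cryptography package and local PEM copies of the site certificate (site.pem) and its issuer (issuer.pem), both hypothetical filenames:

# Ask a certificate's OCSP responder whether it has been revoked.
import urllib.request
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

# Find the responder URL in the Authority Information Access extension.
aia = cert.extensions.get_extension_for_oid(
    ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
url = next(desc.access_location.value for desc in aia
           if desc.access_method == AuthorityInformationAccessOID.OCSP)

req = ocsp.OCSPRequestBuilder().add_certificate(
    cert, issuer, hashes.SHA1()).build()
http_req = urllib.request.Request(
    url, data=req.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"})
resp = ocsp.load_der_ocsp_response(urllib.request.urlopen(http_req).read())
if resp.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
    print(resp.certificate_status)  # GOOD, REVOKED, or UNKNOWN
else:
    print(resp.response_status)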

Another argument is that browsers overpromise the safety of sites by using terms like Secure in the UI—while the browser can know whether a given HTTPS connection is present and free of errors, it has no knowledge of the security of the destination site or CDN, nor its business practices. Internet Explorer’s HTTPS UX used to have a helpful “Should I trust this site?” link, but that content went away at some point. Security wording is a complicated topic because what the user really wants to know (“Is this safe?”) isn’t something a browser can ever really answer in the affirmative. Users tend to be annoyed when you tell them only the truth– “This download was not reported as not safe.”

The obvious way to address malicious sites is via phishing and malware blocklists, and indeed, you can help keep other users safe by reporting any unblocked phish you find to the Safe Browsing service; this service protects Chrome, Firefox, and Safari users. You can also forward phishing messages to scam@netcraft.com and/or PhishTank. Users of Microsoft browsers can report unblocked phish to SmartScreen (in IE, click Tools > SmartScreen > Report Unsafe Website). Known-malicious sites will get the UI treatment they deserve:

[Screenshot: blocked phishing site warning page]

Unfortunately, there’s always latency in block lists, and a phisher can probably turn a profit with a site that’s live for less than one hour. Phishers also have a whole bag of tricks to delay blocks, including cloaking, whereby they return an innocuous “Site not found” message when they detect that they’re being loaded by security researchers’ IP addresses, browser types, OS languages, etc.

Blame the Websites

Some argue that websites are at fault, for:

  1. Relying upon passwords and failing to adopt unspoofable two-factor authentication schemes which have existed for decades
  2. Failing to adopt HTTPS or deploy it properly until browsers started bringing out the UI sledgehammers
  3. Constantly changing domain names and login UIs
  4. Emailing users non-secure links to redirector sites
  5. Providing bad security advice to users

Blame the Humans

Finally, many lay blame with the user, arguing user education is the only path forward. I’ve long given up much hope on that front—the best we can hope for is raising enough awareness that some users will contribute feedback into more effective systems like automated phishing block lists.

We’ve had literally decades of sites and “experts” telling users to “Look for the lock!” when deciding whether a site is to be trusted. Even today we have bad advice being advanced by security experts who should know better, like this message from the Twitter security team which suggests that https://twitter.com.access.info is a legitimate site.

[Screenshot: email from the Twitter Security Team]

Where Do We Go From Here?

Unfortunately, I don’t think there are any silver bullets, but I also think that unsolvable problems are the most interesting ones. I’d argue that everyone who uses or builds the web bears some responsibility for making it safer for all, and each should try to find ways to do that within their own sphere of control.

Following is an unordered set of ideas that I think are worthwhile.

Signals

Historically, we in the world of computers have been most comfortable with binary – is this site malicious, or is it not? Unfortunately, this isn’t how the real world usually works, and fortunately many systems are moving more toward a set of signals. When evaluating the trustworthiness of a site, there are dozens of available signals, including:

  • HTTPS Certificates
  • Age of the site
  • Has the user visited this site before
  • Has the user stored a password on this site before
  • Has the user used this password before, anywhere
  • Number of visitors observed to the site
  • Hosting location of the site
  • Presence of sensitive terms in the content or URL
  • Presence of login forms or executable downloads

None of these signals is individually sufficient to determine whether a site is good or evil, but combined together they can be used to feed systems that make good sites look safer and evil sites more suspect. Suspicious sites can be escalated for human analysis, either by security experts or even by crowd-sourced directed questioning of ordinary users. For instance, see the heuristic-triggered Is This Phish? UI from Internet Explorer’s SmartScreen system:

[Screenshot: SmartScreen’s “Is This Phish?” prompt]

The user’s response to the prompt is itself a signal that feeds into the system, and allows more speedy resolution of the site as good or bad and subject to blocking.

One challenge with systems based on signals is that, while they grow much more powerful as more endpoints report signals, those signal reports may have privacy implications for individual users.
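
As a toy illustration only (this is not any real browser’s algorithm, and the signal names and weights below are invented for the example), combining signals might look like:

# Fold several signals into a single suspicion score; suspicious sites
# get escalated for human review rather than blocked outright.
SIGNAL_WEIGHTS = {
    "site_age_days_under_30": 2.0,
    "never_visited_before": 1.0,
    "stored_password_reused_here": 3.0,
    "sensitive_terms_in_url": 2.5,
    "has_login_form": 1.5,
}

def suspicion_score(signals):
    """Sum the weights of the signals that fired."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

observed = {
    "site_age_days_under_30": True,
    "never_visited_before": True,
    "sensitive_terms_in_url": True,
    "has_login_form": True,
}
score = suspicion_score(observed)
print(score, "-> escalate for human review" if score > 5 else "-> allow")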

Reputation

In the real world, many things are based on reputation—without a good reputation, it’s hard to get a job, stay in business, or even find a partner.

In browsers, we have a well-established concept of bad reputation (your site or download appears on a block list) but we’ve largely pushed back against the notion of good reputation. In my mind, this is a critical failing—while much of the objection is well-meaning (“We want a level playing field for everyone”), it’s extremely frustrating that we punish users in support of abstract ideals. Extended Validation certificates were one attempt at creating the notion of good reputation, but they suffered from the whims of CAs’ business plans and the legal system that drove absurdities like:

[Screenshot: EV security chip showing the full legal name of the Washington Post’s holding company]

While there are merits to the fact that, on the web, no one can tell whether your site is a one-woman startup or a Fortune 100 behemoth, there are definite downsides as well.

Owen Campbell-Moore wrote a wonderful paper, Rethinking URL bars as primary UI, in which he proposes a hypothetical origin-to-local-brand mapping, which would allow browsers to help users more easily understand where they really are when interacting with top sites. I think it’s a long-overdue idea, brilliant in its obviousness. When it eventually gets built (by Apple, Microsoft, Google, or some upstart?) we’ll wonder what took us so long. A key tradeoff in making such a system practical is that designers must necessarily focus on the “head” of the web (say, the most popular 1000 websites globally); trying to generalize the system to a perfectly level playing field isn’t workable– just like in the real world.

Assorted

  • Browsers could again make a run at supporting unspoofable authentication methods natively.
  • Browsers could do more to take authentication out from the Zone of Death, limiting spoofing attacks.
  • New features like Must-Staple will allow browsers to respond to certificate revocation information without a performance penalty.
  • Automated CAs could deploy heuristics that require additional validation for certificates containing often-spoofed domains; a sketch of such a heuristic appears after this list. For instance, if I want a certificate for paypal-payments.com, they could either demand additional information about me personally (allowing a visit from law enforcement if I turn out to be a phisher), or they could simply issue a certificate whose validity period begins a week in the future. The certificate would be recorded in Certificate Transparency logs immediately, allowing brand-monitoring firms to take note and respond as soon as phishing is detected once the certificate becomes valid. Such systems will be imperfect (you need to handle BankOfTheVVest.com as a potential spoof of BankOfTheWest.com, for instance), but even targeting a few high-value domains could make a major dent.
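
Here’s a rough sketch of that kind of pre-issuance heuristic (illustrative only; the confusable-sequence table and brand list are invented for the example):

# Decide whether a requested hostname resembles a high-value brand
# closely enough to warrant extra validation before issuance.
CONFUSABLES = [("vv", "w"), ("rn", "m"), ("0", "o"), ("1", "l"), ("3", "e")]
HIGH_VALUE = {"paypal", "bankofthewest", "bankofamerica", "google"}

def normalize(label):
    label = label.lower()
    for seq, repl in CONFUSABLES:
        label = label.replace(seq, repl)
    return label

def needs_extra_validation(domain):
    # Compare every dot-separated label and dashed fragment to the brand list.
    fragments = domain.lower().replace("-", ".").split(".")
    return any(normalize(f) in HIGH_VALUE for f in fragments)

print(needs_extra_validation("bankofthevvest.com"))   # True
print(needs_extra_validation("paypal-payments.com"))  # True
print(needs_extra_validation("example.com"))          # False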

 

Fight the phish!

 

-Eric

Update: Chris wrote a great post on HTTPS UX in Chrome.
