Useful Resources when Developing Chrome Extensions

I’ve built a handful of Chrome extensions this year, and I wrote up some of what I learned in a post back in March. Since then, I’ve found two more tricks that have proved useful.

First, the Chrome Canary channel includes a handy error console that quickly surfaces extension errors. Update: This feature is now available in all channels, including Chrome Stable.

Simply open the chrome://extensions page; after you tick an extension’s Collect errors checkbox, an Errors link will appear for it:

Errors link on chrome://extensions

Error console

This error console can be enabled in other channels (dev/beta/stable) by launching Chrome with the --error-console command-line argument.
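
For example, on Windows you can launch a non-Canary channel with the flag like so (the path to chrome.exe varies by install):

 chrome.exe --error-console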

Next, you’ll often end up with both a public version of your extension and a side-loaded developer build installed at the same time. If their Action icons are the same, it’s easy to get confused about which version is which.

To help clarify which is the development version, you can easily add a Dev badge to the icon of the developer build:
Dev badge

The code to add to your background.js file is straightforward and works even if the background page is not persistent:

 // Our background page isn't persistent, so subscribe to events.
 chrome.runtime.onStartup.addListener(init);
 // onInstalled fires on install/update and when the user reloads the
 // extension from the chrome://extensions page.
 chrome.runtime.onInstalled.addListener(init);

 function init() {
   // Add a badge to the icon if this is a developer (side-loaded) install.
   chrome.management.getSelf((info) => {
     if (info.installType === "development") {
       chrome.browserAction.setBadgeText({ text: "dev" });
     }
   });
 }
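
One convenient detail: chrome.management.getSelf() is among the few chrome.management methods that work without declaring the "management" permission in your manifest, so (to the best of my knowledge) this snippet shouldn’t add any new permission warnings for your users.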

As mentioned in my last post, the Chrome Extension Source Viewer remains invaluable for quickly peeking at the code of existing extensions in the store.

Finally, some day I’ll need to spend more time looking through the many Extension Samples published by Google.

I’ve had more fun writing Chrome extensions than anything else I’ve built this year. If you haven’t tried building an extension yet, I encourage you to try it out. Best of all, your new Chrome extension development skills will readily transfer to building extensions for Opera, Firefox, and Microsoft Edge.

-Eric

Email Tracking Links are the Worst

Note: The non-secure email link vulnerability described in this post was fixed before I posted it publicly. The Clinton campaign’s InfoSec team was polite, communicative, and fun to work with.

All emailed links to HillaryClinton.com should now use HTTPS.

Since building the MoarTLS Browser Extension to track the use of non-secure hyperlinks, I’ve found that a huge number of otherwise-secure sites and services email links to users that are non-secure. Yes, I’ve ranted about this before; today’s case is a bit more interesting.

Here’s a recent example of an email with non-secure links:

Non-secure links in email

As you can see, all of the donation links are HTTP URLs to the hostname links.hillaryclinton.com, including the link whose text claims that it points to httpS://www.hillaryclinton.com. If you click on that link, you usually end up on a secure page, eventually:

Secure server

So, what’s going on here?

Why would a site with an HTTPS certificate send users through a redirect without one?

The answer, alas, is mundane click tracking, and it impacts almost anyone using an “Email Service Provider” (ESP) like MailChimp.

For instance, here’s the original “Download our Browser” email I got from Brave:

Brave Download link uses HTTP

Brave inserted an HTTPS link to their download in their original email template, but MailChimp rewrote it to point non-securely at their click-tracking server “list-manage.com”. The click tracker allows the emailer to collect metrics about the effectiveness of the email campaign (“How many users clicked, and on which link?”). There’s no inherent reason why the tracker must be non-secure, but this appears to be common practice for most ESPs, including the one used by the Clinton campaign.

Indeed, if you change Clinton’s injected tracking URL to HTTPS, you will see a certificate error in your browser:

Bad Certificate name mismatch

… revealing the source of the problem: the links subdomain of the main site is pointed at a third-party ESP, and that ESP hasn’t bothered to acquire a proper certificate for it.

DNS reveals the "links" domain is pointed at a third party

The entire links subdomain is pointed at a 3rd-party server. A friend mused: “Clinton could’ve avoided this whole debacle if she were running her own email servers.”

So What, Who Cares?

When I complain about things like this on Twitter, I usually get at least a few responses like this:

Who cares tweet

The primary problem is that the responder assumes that the HTTP link will reliably redirect to the HTTPS page… and this is true in most cases, except when there’s an attacker present.

The whole point of HTTPS is to defend against network-based adversaries. If there’s an active man-in-the-middle (MITM) attacker on the network, he can easily perform an SSL stripping attack, taking over the non-secure connection from the user and fooling the user into submitting their private information. The attacker can simply keep the connection on HTTP so he can monitor and manipulate it, or he can redirect the victim to a fake domain he controls, even one with a valid HTTPS certificate (e.g. https://donations.hillary-clinton.com).

Okay, so that’s bad.

Unfortunately, it gets worse in the Clinton case. Normally, a bad guy taking advantage of SSL stripping still needs to fool the user into giving up something of value (not a high bar, but a bar nevertheless). In the case of Clinton’s donation link, there’s a bigger problem, alluded to in the text of the email itself:

Donations go through "immediately"

That’s right: if you click the link, the server collects your money, no questions asked. Security researchers will immediately recognize the threat of cross-site request forgery (CSRF): any web page or MITM could direct a victim’s browser to the target link and cause money to be withdrawn from their bank account. To protect against that, a properly developed site includes a secret canary in the URL so that an attacker cannot generate a valid link. And if you look at the markup of the email, you can see that the campaign has done just that (behind the black boxes, to protect my account):

CSRF Canary

Unfortunately, there’s a fatal flaw here: the link is HTTP, which means that the canary is sent in raw form over the network, and the canary doesn’t expire after it’s been used. Anyone who views the unprotected HTTP traffic can collect my secret token and then feed the link back to my browser, forcing me to donate over and over and over again without any kind of prompt.
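
The fix is conceptually simple: treat the canary as a single-use nonce that the server consumes before any money moves. Here’s a minimal sketch of the idea in Node/Express-style JavaScript; the token store and the isValidToken() and chargeCard() helpers are hypothetical stand-ins, not the campaign’s actual code:

 // Hypothetical server-side handler: each emailed link carries a token
 // that is consumed on first use, so a replayed link cannot re-bill the donor.
 const usedTokens = new Set();  // in production, a datastore with expiry

 function handleDonation(request, response) {
   const token = request.query.token;
   if (!isValidToken(token) || usedTokens.has(token)) {
     // Expired or replayed link: ask for explicit confirmation instead.
     response.redirect("/donate/confirm");
     return;
   }
   usedTokens.add(token);     // consume the nonce *before* charging
   chargeCard(request.user);  // hypothetical billing call
   response.send("Thank you for your donation!");
 }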

Aside: Beyond the security problem, there’s a significant functionality problem here. In the HTTP protocol, GET requests, like those sent in navigations, are meant to be idempotent, a fancy way of saying that sending the same request more than once shouldn’t have any side effects. The Clinton campaign’s donation page, however, bills the user every single time the donation link is loaded, no matter why the page was loaded. Even a user who is not under attack can suffer multiple billing if they don’t immediately close the tab after donating. If the user navigates that tab to another site, then clicks the browser’s back button, they’re charged again. Clicking back and forward a few times to figure out what’s happening? Billed over and over and over.

Things are even worse on a memory-constrained device… browsers like Chrome will “page out” tabs to save memory as you switch applications. When you switch back to the browser, it swaps the page back in, reloading it. And you’re billed again:

Push notification from AMEX reveals I've been charged again

… billing continues each time the browser is activated, until you have the good sense to close the tab. (You can manually force Desktop Chrome to discard a tab early by visiting chrome://discards/; you’ll note you’re billed again the next time you click the tab.)


Whether you’re a Presidential Campaign or a streaming music site, please use HTTPS everywhere; there’s no good excuse not to protect your users. And if you’re taking users’ money, be very, very sure that any money-moving request carries a single-use nonce or requires explicit confirmation, so that a replayed link can’t double-bill.

Thanks for your help in securing the web!

-Eric Lawrence

Troubleshooting Windows 10 Bluescreens

I recently bought a Dell XPS 8900 desktop system with Windows 10. It ran okay for a while, but after I enabled Hyper-V, the system would freeze for a few seconds every few minutes and then reboot with no explanation. Looking at the Event Viewer’s Windows Logs > System revealed that the system had bugchecked (blue screened):

Event Viewer - Bugcheck

Bugcheck 0x1a indicates a problem with “Memory Management”.

To investigate, run WinDbg as Administrator and choose File > Open Crash Dump:

WinDBG open crash dump 

Open C:\Windows\memory.dmp. Wait for symbols to download:

Debuggee not connected; symbols downloading

If symbols aren’t downloaded automatically, try typing .symfix and then .reload in the command prompt at the bottom.

Use !analyze -v says WinDBG

Then, follow the tool’s advice and run !analyze -v to have the debugger analyze the crash. WinDBG presents a surprisingly readable explanation:

WinDBG notes driver memory corruption

So a driver’s at fault, but which one?

Stack trace points at WiFi

It looks like bcmwl63a, for which symbols aren’t loaded, one clue that this isn’t Microsoft’s code. Let’s find out more about it using lm vm bcmwl63a:


Debugger points at Wifi driver

Pop over to the listed path to examine the file’s properties, and see that it’s the WiFi driver:

Driver details

The Dell 1560 802.11ac card is the same type as found in my Dell XPS 13” notebook PC, where it was responsible for a flurry of bluescreens last year. The driver appears to have improved (the XPS 13 doesn’t crash anymore), but it looks like some corner cases got missed, likely related to the Hyper-V virtual networking code. Rather than waiting for an updated driver, the experts on Twitter suggested I simply upgrade to the Intel 7265 and install the latest Intel PROSet wireless driver. At $20 on Amazon, this seemed like a fine approach.

The upgrade was straightforward and would’ve taken less than 5 minutes, except that one of the nearly microscopic sockets broke off as I removed the Dell card’s antenna cables:

BrokenSocket

I used a needle to remove the broken pieces from the antenna’s connector so that it would fit onto the new card’s socket. After connecting the antenna, the new card slid easily into the slot and Windows recognized it on the next boot. I used Device Manager to ensure the drivers loaded for the new card’s Bluetooth support, and installed the latest PROSet driver. Everything’s been working great since.

While WinDBG is one of the more inscrutable tools I use, it worked great in this situation and would point even a novice in the right direction.
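
For future reference, the whole investigation boils down to just four debugger commands (fix up the symbol path, reload symbols, run the automated analysis, then inspect the suspect module):

 .symfix
 .reload
 !analyze -v
 lm vm bcmwl63a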


-Eric

File the Bug

Two experiences this week reminded me of a very important principle for improving the quality of software… if you see something, say something. And the best way to do that is to file a bug.

Something Weird? File a bug!

The first case was last Thursday, when a user filed a bug in Chrome’s tracker noting that Chrome’s window border icons often got “stuck” in a hover state after being moused over. It was a clear, simple bug report and it was easily reproduced. I’ve probably hit this a hundred times over the years and didn’t think much of it… “probably some weird thing in my graphics card or some utility I’m running.” It never occurred to me that everybody else might be seeing this, or that it was exhibited only by Chrome.

Fortunately, the bug report showed that this issue was something others were hitting too, so I took a look. The problem proved to be almost unique to Chrome (not occurring in other Windows applications), and has existed for at least seven years, reproducing on every version from Windows Vista to Windows 10.

A scan of the bug tracker suggests that Thursday’s report was the first time in those seven years that this bug was filed; less than a week later, the simple fix is checked in and on the way to Chrome 54. Obviously, this is only a minor cosmetic issue, but we want our browser looking good at all times!

Animation of the fixed bug

Another cool aspect of this fix is that it will fix other applications too… the Opera and Vivaldi browsers are based on Chromium open-source roots and inherited this problem; they’ll probably pick up this fix shortly too.

th;df – Say Something Anyway

Even if you don’t file a bug, you should still say something. Recently, Ana Tudor noted on Twitter that after a restart, her system was in a state where neither Chrome nor Brave could render web content; both browsers showed the “crashed tab” experience, even after restarting and reinstalling the browsers. Running with the --no-sandbox flag worked, and rebooting the system fully resolved the problem. Her report sounded suspiciously similar to a problem I’d encountered back in April; fortunately, I’d filed a bug.

At the time, that bug was deemed unreproducible and I’d dismissed it as some wonkiness on my specific system, but Ana’s complaint brought this back to my attention. She’d also added another piece of data I didn’t have in my original report—the problem also occurred in Brave, but not Firefox or IE.

Even more fortunately, I hit this problem again after a system reboot yesterday, and because of Ana’s report, I was no longer convinced that this bug was some weird quirk of just my system. Playing with the repro, I found that neither Opera nor Vivaldi reproduced the problem; both of those browsers are architecturally similar to Brave, but importantly, both are 32-bit. This was a great clue that the problem was specific to 64-bit, and I confirmed it, finding that the bug repro’d in 64-bit Chrome Canary but not in 32-bit Canary. Now we’re cooking with gas!

I built Chromium and ran it through WinDBG, seeing that when the sandboxed content renderer process was starting up, it was hitting three debug breakpoints before dying. The breakpoints were in sandbox::InterceptionAgent::OnDllLoad, a function Chrome uses to thunk certain Windows APIs to inject security filters.

At this point, and with a reliable repro in hand, my smarter colleague took over and quickly found that the code to allocate memory for the thunk was failing, due to some logic bugs. Thunks must be located at a particular place in memory (within 2GB of the thunked function), and the code to place our thunks was failing when ASLR randomly loaded the kernel32, gdi32, and user32 DLLs at the very top of the address space, leaving no room for our thunks. When the allocation failed, Chrome refused to allow the DLL to be loaded into the sandbox, and the renderer necessarily died.

After the user rebooted the system again, ASLR moved the DLLs to some other location, and (usually) that location leaves room to place our thunks. With 20/20 hindsight, the root cause of this bug (and the upcoming fix) are obvious.
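
To make the 2GB constraint concrete, here’s a tiny illustrative sketch (JavaScript for readability; the real logic lives in Chromium’s C++ sandbox code, and the rel32 rationale is my gloss, not a quote from the fix):

 // A thunk typically reaches its target via a signed 32-bit (rel32)
 // displacement, so it must sit within +/-2GB of the thunked function.
 // Addresses are BigInts because 64-bit pointers overflow JS numbers.
 function canReach(thunkAddress, targetAddress) {
   const delta = thunkAddress - targetAddress;
   return delta >= -(2n ** 31n) && delta < 2n ** 31n;
 }

 // If ASLR parks the DLLs at the very top of the address space, there may
 // be no free region satisfying canReach(), which is the failure described above.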

But we only knew to look for the problem because Ana took the time to say something.

Final Thoughts

  • Browser telemetry is great—we catch crashes and all sorts of problems with it. But debugging via telemetry can be really challenging— more akin to solving a mystery than following a checklist. For instance, in the case of the sandbox bug, the fact that the problem reproduced in Brave was a huge clue, and not something we’d ever know from telemetry.
  • Well-run projects love bug reports. Back when I was building Fiddler, a lot of users I talked to said things like: “Well, it’s free and pretty good, so I didn’t want to bother you with a complaint about some bug.” This is exactly backwards. For most of Fiddler’s lifetime, bug reports from the community were the only compensation I received for making the tool available to everyone for free. Getting bug reports meant I could improve the product without having to pay for test machines and devices, hire a test organization, etc. When I eventually sold Fiddler to Telerik, a large part of the value they were buying was the knowledge that the tool had been battle-tested by millions of users over 9 years and that I’d fixed thousands of bugs from that community.
  • Filing bugs is generally easy, and it’s especially easy for Chrome.
    • First, simply search for an existing bug on crbug.com
    • If you find it’s a known issue, star it so you get updates
    • If it’s not a known issue, click the New Issue button at the top-left
    • Tell us as much as you can about the problem. Try to put yourself in the reader’s shoes—we don’t know much about your system or scenario, so the more details you can provide, the better.
  • Screenshots and URLs that reproduce problems are invaluable.
  • Find a bug in another browser? Report it!


Thanks for your help in improving our digital world!


-Eric

Using Fiddler With iOS 10 and Android 7

If you’ve tried to use Fiddler with the iOS 10 beta or Android 7 “Nougat”, you’ve probably found that HTTPS decryption isn’t working, even if you use the latest Fiddler and the Fiddler Certificate Maker add-on. Unfortunately, at the moment both platforms are broken, but for different reasons. In both cases, the client will fail to receive responses for HTTPS requests, and Fiddler will show only a CONNECT tunnel.

iOS 10 Change

After installing the FiddlerRoot certificate, you also need to go to Settings > General > About > Certificate Trust Settings and manually enable full trust for the FiddlerRoot root certificate, including accepting a dialog that warns that this will allow a third party to eavesdrop on all your communications.

iOS 10 Beta Bug (Fixed for final version)

The beta of iOS 10 had a bug whereby, if the response to an HTTP CONNECT tunnel request contained a Connection: close response header, the client would close the connection immediately instead of, as it should, keeping the tunnel open until the underlying TCP/IP connection closed. A few minor platforms have had the same bug over the years, but iOS is definitely the first important platform with this issue. At least two bugs have been filed with the Apple “Radar” bug reporter.

Working around this limitation is simple. In Fiddler, click Rules > Customize Rules. Scroll to the OnBeforeResponse function. Just inside that function, add the following lines:

  // Flag CONNECT tunnels in red and strip the Connection: close
  // response header that trips up the iOS 10 beta.
  if (oSession.HTTPMethodIs("CONNECT")) {
    oSession["ui-backcolor"] = "red";
    oSession.ResponseHeaders.Remove("Connection");
  }

Save the file and try connecting again.
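
Because the rule paints matching sessions red (via the ui-backcolor flag), you can tell at a glance in the Web Sessions list whether the workaround actually fired.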

Android 7 Feature

In contrast to the iOS regression, the change in Android 7 was intentional. The Android team decided that, by default, HTTPS certificate validation for apps targeting API Level 24 and later ignores all user-installed root certificates, meaning that your efforts to manually trust Fiddler’s root certificate will be fruitless. Individual application developers can temporarily override this change while debugging by updating the application’s network security configuration:

 <network-security-config>
   <debug-overrides>
     <trust-anchors>
       <!-- Trust user-added CAs while debuggable only -->
       <certificates src="user" />
     </trust-anchors>
   </debug-overrides>
 </network-security-config>

…or at all times…

 <network-security-config>
   <base-config>
     <trust-anchors>
       <!-- Trust preinstalled CAs -->
       <certificates src="system" />
       <!-- Additionally trust user-added CAs -->
       <certificates src="user" />
     </trust-anchors>
   </base-config>
 </network-security-config>
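
In either case, the config takes effect only if the app’s manifest points at it; a sketch, assuming the XML above is saved as res/xml/network_security_config.xml:

 <!-- AndroidManifest.xml: wire up the network security config -->
 <application android:networkSecurityConfig="@xml/network_security_config">
   <!-- activities, services, etc. -->
 </application>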

Unfortunately, these changes can only be made by application developers, not end users. End users will probably need to root their devices, akin to what’s required to circumvent certificate pinning.

Certificate Validity Length

By default, Fiddler-generated certificates are valid for five years (and backdated one year). However, this can cause an ERR_CERT_VALIDITY_TOO_LONG error in Chrome on Android. To fix this for Fiddler’s default (CertEnroll) certificate generator, run about:config in QuickExec to edit preferences. Set fiddler.certmaker.ValidDays to 820 and, if needed, reset your certificates in Fiddler using the Tools > Fiddler Options > HTTPS > Actions button.

-Eric

Cheating Authenticode, Redux

Back in 2014, I explained two techniques that developers have used to store information in Authenticode-signed executables without breaking the signature, including information about the EnableCertPaddingCheck registry flag that can be set to defeat the technique.¹

Recently, Kevin Jones pointed out that Chrome’s signed installer differs on each download, as you can see in this file comparison of two copies of the Chrome installer, one downloaded from IE and one from Firefox:

Binary diff shows different bytes

Surprisingly, the Chrome installer does not use either of the unvalidated data-injection techniques I wrote about previously.

So, what’s going on?

The Extra Certificate Technique

Fortunately, Kevin wrote an awesome tool for examining Authenticode signatures at a deep level: Authenticode Lint, which he describes in this blog post. Running his tool with the default options immediately reveals the solution to our mystery: the signing block contains an extra certificate named (literally) “Dummy Certificate”:

Authenticode Lint reveals "Dummy Certificate"

By running the tool with the -extract argument, we can view the extra (untrusted) certificate included in the signature block and see that it contains a proprietary data field with the per-instance data:

You can see this technique in use inside Microsoft Edge installers as well:

This technique for injecting unsigned data into a signature block could be a source of vulnerabilities (like this attack) unless you’re very careful; anyone using this Extra Certificate technique should follow the same best practices previously described for the Unvalidated Attributes technique. (Kevin further explored an ultimately-aborted defense against such injection in his post about Authenticode Sealing.)

Authenticode Lint is a great tool, and I strongly recommend using it to help ensure you’re following best practices for Authenticode signing. My favorite feature is that you can automate the tool to verify an entire folder tree of binaries, so you can be confident that you’ve signed all of the expected files, and done so properly.

-Eric

¹ In the years since my original “Cheating Authenticode” post, I’ve learned that even when EnableCertPaddingCheck is not enabled, the signature verification API scans the data in the signature block of the file to detect markers of popular data-archival formats (e.g. PKZIP magic bytes). If such markers are found, the signature is treated as invalid. This marker scanning was a Windows Vista-era mitigation (later backported) that attempts to block the abuse of insecurely-designed, signed self-extracting executables: an attacker could swap out the compressed payload for a malicious payload without breaking the file’s signature.
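
As an illustration (my sketch, not Windows’ actual implementation), such a marker scan amounts to searching the unauthenticated bytes for known magic values:

 // Sketch: flag a signature block whose unauthenticated region contains
 // archive magic bytes, e.g. PKZIP's "PK\x03\x04" local-file-header marker.
 function containsArchiveMarker(bytes) {
   const magic = [0x50, 0x4b, 0x03, 0x04];  // "PK\x03\x04"
   for (let i = 0; i + magic.length <= bytes.length; i++) {
     if (magic.every((value, j) => bytes[i + j] === value)) return true;
   }
   return false;
 }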

TLS Fallbacks are Dead

Gravestone reading RIP Fallbacks

tl;dr

Just over 5 years ago, I wrote a blog post titled “Misbehaving HTTPS Servers Impair TLS 1.1 and TLS 1.2.”

In that post, I noted that enabling versions 1.1 and 1.2 of the TLS protocol in IE would cause some sites to load more slowly, or fail to load at all. Sites that failed to load were sending TCP/IP RST packets when receiving a ClientHello message that indicated support for TLS 1.1 or 1.2; sites that loaded more slowly relied on the fact that the browser would retry with an earlier protocol version if the server had sent a TCP/IP FIN instead.

TLS version fallbacks were an ugly but practical hack: they allowed browsers to enable stronger protocol versions before some popular servers were compatible. But version fallback incurs real costs (a sketch of the retry loop follows the list):

  • security – a MITM attacker can trigger fallback to the weakest supported protocol
  • performance – retrying handshakes takes time
  • complexity – falling back only in the right circumstances, creating new connections as needed
  • compatibility – not all clients are willing or able to fall back (e.g. Fiddler would never fall back)
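
In JavaScript-flavored pseudocode (tlsHandshake is a hypothetical helper; real TLS stacks live far below script):

 // Version fallback: try the strongest protocol first, then retry weaker.
 // Because any handshake failure triggers a retry, an attacker who injects
 // a FIN/RST can walk the client down to the weakest version it supports.
 async function connectWithFallback(host) {
   for (const version of ["TLS 1.2", "TLS 1.1", "TLS 1.0"]) {
     try {
       return await tlsHandshake(host, version);
     } catch (e) {
       // Handshake failed; fall through and retry with a weaker version.
     }
   }
   throw new Error("No TLS version succeeded");
 }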

Fortunately, server compatibility with TLS 1.1 and 1.2 has improved a great deal over the last five years, and browsers have begun to remove their fallbacks; first, fallback to SSL 3 was disabled, and now Firefox 37+ and Chrome 50+ have removed fallback entirely.

In the rare event that you encounter a site that needs fallback, you’ll see a message like this, in Chrome:

Google Chrome 52 error

or in Firefox:

Firefox 45 error

Currently, both Internet Explorer and Edge still fall back; first, a TLS 1.2 handshake is attempted:

TLS 1.2 ClientHello bytes

and after it fails (the server sends a TCP/IP FIN), a TLS 1.0 attempt is made:

TLS 1.0 ClientHello bytes

This attempt succeeds and the site loads in IE.

UPDATE: Windows 11 disables TLS/1.1 and TLS/1.0 by default for IE. Even if you manually enable these old protocol versions, fallbacks are not allowed unless a registry/policy override is set:

 EnableInsecureTlsFallback = 1 (DWORD)

…set inside either of:

 HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings
 HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings

If you analyze the affected site using SSLLabs, you can see that it has a lot of problems; the key one for this case is in the middle:

Grade F on SSLLabs

This is repeated later in the analysis:

TLS Version intolerant for versions 1.2 and 1.3

The analyzer finds that the server refuses not only TLS 1.2 but also the upcoming TLS 1.3.

Unfortunately, as an end-user, there’s not much you can safely do here, short of contacting the site owners and asking them to update their server software to support modern standards. Fortunately, this problem is rare– the Chrome team found that only 0.0017% of TLS connections triggered fallbacks, and this tiny number is probably artificially high (since a spurious connection problem will trigger fallback).

-Eric

Non-Secure Clicktrackers–The Fastest Path from A+ to F

HTTPS only works if you use it.

Coinbase is an online bitcoin exchange backed by $106M in venture capital investment. They’ve got a strong HTTPS security posture, including the latest ciphers, a 4096-bit RSA key, and advanced features like browser-preloaded HSTS and HPKP.

SSLLabs grades Coinbase’s HTTPS deployment an A+:

A+ Grade from SSLLabs

This is a well-secured site with a professional security team.

Here’s the email they just sent me:

"Add a debit card"

Let’s run the MoarTLS Analyzer on that:

All Red

That’s right… every hyperlink in this email is non-secure and any click can be intercepted and sent anywhere by a network-based attacker.

Sadly, Coinbase is far from alone in snatching security defeat from the jaws of victory; my #HTTPSFAIL folder includes a lot of other big names:

HTTPSFailures


It doesn’t matter how well you secure your castle if you won’t help your visitors get to it securely. Use HTTPS everywhere.


-Eric

Update: I filed a bug with Coinbase on HackerOne. Their security team says that they “agree” that these links should be HTTPS, but the problem is Mailchimp (their email vendor) and they can’t fix it. Mailchimp offers a security vulnerability reporting form, delivered exclusively over HTTP:

MailChimp

Coinbase isn’t the first service whose security is bypassed because their emails are sent with non-secure links; the Brave browser download announcements suffered the same problem.