Strict-Transport-Security for *.dev, *.app and more

Some web developers host their pre-production development sites by configuring their DNS such that hostnames ending in .dev point to local servers. Such configurations were not meaningfully impacted when .dev became an official generic top-level domain (gTLD) a few years back, because even as smart people warned that developers should stop squatting on it, Google (the owner of the .dev TLD) was hosting few (if any) sites on the gTLD.

With Chrome 63, shipping to the stable channel in the coming days, things have changed. Chrome has added .dev to the HSTS Preload list (along with the .foo, .page, .app, and .chrome TLDs). This means that any attempt to visit http://anything.dev will automatically be converted to https://anything.dev.
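Conceptually, a preloaded TLD acts like a rewrite rule baked into the browser, applied before any request hits the network. The effect can be sketched in a few lines of Python (an illustration of the behavior, not Chrome’s actual implementation):

```python
# Illustrative sketch of HSTS-preload behavior for whole TLDs.
# (Not Chrome's implementation; the TLD list is the one from this post.)
from urllib.parse import urlsplit, urlunsplit

PRELOADED_TLDS = {"dev", "foo", "page", "app", "chrome"}

def upgrade_if_preloaded(url: str) -> str:
    """Rewrite http:// to https:// when the hostname's TLD is preloaded."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    tld = host.rsplit(".", 1)[-1]
    if parts.scheme == "http" and tld in PRELOADED_TLDS:
        return urlunsplit(("https",) + tuple(parts[1:]))
    return url
```

Because the upgrade happens before any bytes leave the machine, a local server answering for anything.dev never even sees the plain-HTTP request.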

Other major browsers use the same underlying HSTS Preload list, and we expect that they will also pick up the .dev TLD entry in the coming weeks and months.

Of course, if you were already using HTTPS with valid and trusted certificates on your pre-production sites (good for you!), the Chrome 63 change may not impact you very much right away. But you’ll probably want to move your pre-production sites off of .dev and instead use e.g. .test, a TLD reserved for this purpose.
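If you do move local development to .test, no public DNS is involved; a hosts-file entry pointing the name at your local server is enough (the project name below is just an example):

```
# /etc/hosts (on Windows: %SystemRoot%\System32\drivers\etc\hosts)
127.0.0.1    myproject.test
```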

Secure all the things!

-Eric

PS: Perhaps surprisingly, the dotless (“plain”) hostnames http://dev, http://page, http://app, http://chrome, and http://foo are all impacted by new HSTS rules as well. There’s a policy.

For a Lark

“Happy Holidays,” David said as he poked his head into my office, handing me an unwrapped holiday card featuring a kitten in a Santa hat. As I took it, I nearly dropped a small white envelope that slipped out from inside. The inscription in the card read simply “Best wishes, David – 2010.”

“Uh, thanks, you too!” I replied, both surprised and a bit uncomfortable that my colleague had gotten me a card. We were friendly but not friends. We’d only worked together a few times over the prior year, and it would’ve never occurred to me to get him anything for Christmas. He seemed like an archetypal geek, so we shared an interest in technology and science fiction, but we didn’t hang out or anything like that. Fortunately, he had a stack of cards in his hands, so it wasn’t like I’d been singled out or anything. Hoping he hadn’t gotten me anything fancy, I asked “What’s this?” as I flipped over the rigid envelope. In small print near the flap, it read “Do not open until Christmas 2017.”

His eyes twinkled and he grinned mischievously, an expression I’d never seen from him before. “Just a tiny gift. Well, maybe sort of a test. It’s very important that I give it to you now. But if you open it before 2017, it’ll be the crummiest gift you ever got. If you can wait, maybe it’ll be pretty nice.”

I furrowed my brow. “So, like some sort of savings bond thing?” I asked, thinking back to the bonds I’d gotten from far-off relatives as a little kid… I’d recently stopped dragging them around from apartment to apartment and taken them to the bank to collect the princely sum of $261, two decades after I’d ungratefully wished they were some action figures or a book instead.

David smiled. “Sure, sorta. Don’t lose it. Don’t get it wet. And don’t open it early!”

“Uh, I won’t… Thanks?” I promised, and with that, David disappeared into Rob’s office next door to start his spiel all over.

Weird. Well, I’d already bought my direct reports gift cards for the IPic movie theatre and I had one extra left over… I’ll put that in a New Year’s card for David and leave it in his office sometime next week, I resolved. I tossed the card and envelope into the stack of RFC printouts on my bookshelf and went back to the email I was writing, hoping to get everything squared away before leaving for a short Christmas vacation.

The envelope sat on my bookshelf undisturbed, buried in an ever-growing pile of paper. I might’ve remembered it the following year, but David had left the company that summer, off to do a volunteer tour with the Peace Corps before joining some startup out in San Francisco. Still buried in a pile of paper when I left the company two years later, the card made its way into an unsorted moving box labeled “office stuff” as we moved across the country to Texas. It then sat quietly in the box in my garage for five more years.

In the summer of 2017, I finally got around to digging through the garage, trashing what I could in an effort to make way for the growing proliferation of tricycles, big wheels, wagons, bikes, and pool toys that our two Texas-born children had collected. I spent a quiet Saturday afternoon in July mired in nostalgia, poring through boxes full of old books and papers and remembering a life before kids and so many responsibilities.

When I eventually uncovered the kitten card in the pile, I snorted and stretched to toss it in the “Recycle” pile before I remembered the weird little envelope. Sure enough, it was still inside, forgotten and untouched for the better part of a decade. I ignored the admonition of the faded green warning and tore it open, long-forgotten curiosity mounting.

The envelope contained a pile of papers folded in thirds. The outermost of these was a cleanly cut sheet of wax paper of the sort that was used for sandwiches back before ziplock bags took over the world. Odd, I mused as I discarded it. The next was a folded sheet of thick white cardstock, taped closed, which bore a short paragraph printed in Christmas-colored ink. It read simply:

Is it Christmas 2017 yet?

If so, happy holidays! Enjoy your present.
If not, please google ‘Marshmallow experiment’ and wait patiently.
You’ve been warned.

I paused as the marshmallow reference tickled something in my memory … some university lecture I’d forgotten long ago? Mildly annoyed, I dragged out my phone and searched as instructed. Oh yeah, that Stanford study about delayed gratification. It’s not like anyone will ever know… I mused as I put down my phone and started to peel back the tape holding the cardstock shut.

The door to the garage opened. “Nap’s over, Nate’s up.” my wife called, and I tossed the letter back in the box, eagerly grabbing the large pile of recycling I’d generated to show her my progress. Dancing around, building lego cars, and wrestling with my two kids, I completely forgot about David’s weird little present for another few months.

That December, off work for a two-week winter holiday vacation, I resolved to finish cleaning out the garage. Thirty minutes into the job, I came across the card. December… close enough. I tore open the cardstock. Inside lay a single laser-printed page with a letter printed on one side. The opening paragraph read:

Happy holidays, friend!

I hope you’ve waited patiently for this small gift and aren’t too annoyed at the oddity of its presentation, but it’s all for a purpose.

My accountant has instructed me to make it very clear that this is a GIFT, granted freely without any restrictions, from me to you on December 17th, 2010. It has an approximate market value of $250. Relax– it only cost me about 8 bucks, and it may well be worthless by the time you open it. (Market value is only possibly important for tax purposes)

Beneath the card was a black and white picture, one inch square, full of smaller squares, the sort of bar code you’ll find on the back of shampoo bottles.

The letter continued and I read on, confused but intrigued.

This is a QR code containing the private key for a digital wallet containing bitcoin. Bitcoins are a virtual currency that I’ve been playing with this year, and I thought it would be a lark to give some out as a present. (I’m not clever at picking presents– growing up, I got a new leather wallet every year for Christmas.)

Perhaps in 2017 bitcoins have become worthless (that seems like the most likely outcome), but I have a hunch that perhaps they’ll continue to appreciate over the years. If you’ve been patient, maybe it’s enough to buy you a nicer present now.

If so, happy holidays! If not, I hope my folly brings you at least a chuckle. :)

My heart started to pound in my chest. The page concluded with David’s signature, preceded by 7 hand-scrawled characters.

1000 BTC

Hands shaking with the instant weight of the letter, I dropped it and the universe entered slow motion as the paper fluttered to the ground.

-eric

post-script: https://textslashplain.com/2017/12/06/what-if/

Google Internet Authority G3

For some time now, operating behind the scenes and going mostly unnoticed, Google has been changing the infrastructure used to provide HTTPS certificates for its sites and services.

You’ll note that I said mostly. Over the last few months, I’ve periodically encountered complaints from users who try to load a Google site and get an unexpected error page:

[Screenshot: certificate error page]

Now, there are a variety of different problems that can cause errors like this one– in most cases, the problem is that the user has some software (security software or malware) installed locally that is generating fake certificates that are deemed invalid for various reasons.

However, when following troubleshooting steps, we’ve determined that a small number of users encountering this NET::ERR_CERT_AUTHORITY_INVALID error page are hitting it for the correct and valid Google certificates that chain through Google’s new intermediate Google Internet Authority G3. That’s weird.

What’s going on?

The first thing to understand is that Google operates a number of different certificate trust chains, and we have multiple trust chains deployed at the same time. So a given user will likely encounter some certificate chains that go through the older Google Internet Authority G2 chain and some that go through the newer Google Internet Authority G3 chain– this isn’t something the client controls.

You can visit this GIA G3-specific test page to see if the G3 root is properly trusted by your system.

More surprisingly, it’s also the case that you might be getting a G3 chain for a particular Google site (e.g. https://mail.google.com) while some other user is getting a G2 chain for the same URL. You might even end up with a different chain simply based on what Google sites you’ve visited first, due to a feature called HTTP/2 connection coalescing.
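If you’re curious which chain your own client negotiates, Python’s ssl module can show the issuer of the certificate you receive. This is a quick sketch (the helper names are mine; getpeercert() returns the issuer as a tuple of RDN tuples):

```python
import socket
import ssl

def issuer_common_name(peercert: dict) -> str:
    """Flatten ssl.getpeercert()'s 'issuer' field and return its CN."""
    fields = dict(kv for rdn in peercert.get("issuer", ()) for kv in rdn)
    return fields.get("commonName", "")

def fetch_issuer(host: str) -> str:
    """Connect to host:443 and report which CA issued its certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_common_name(tls.getpeercert())
```

Running fetch_issuer("mail.google.com") may report “Google Internet Authority G3” on one machine and “Google Internet Authority G2” on another, for exactly the reasons above.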

In order to see the raw details of the certificate encountered on an error page, you can click the error code text on the blocking page. (If the site loaded without errors, you can view the certificate like so).

Google’s new certificate chain is certainly supposed to be trusted automatically– if your operating system (e.g. Windows 7) didn’t already have the proper certificates installed, it’s expected to automatically download the root certificate from the update servers (e.g. Microsoft WindowsUpdate) and install it so that the certificate chain is recognized as trusted. In rare instances, we’ve heard of this process not working– for instance, some network administrators have disabled root certificate updates for their enterprise’s PCs.

On modern versions of Windows, you can direct Windows to check its trusted certificate list against the WindowsUpdate servers by running the following from a command prompt:

certutil -f -verifyCTL AuthRootWU

Older versions of Windows might not support the -verifyCTL command. You might instead try downloading the R2 GlobalSign Root Certificate directly and then installing it in your Trusted Root Certification Authorities:

[Screenshots: Certificate Import Wizard – Local Machine, Trusted Root Certification Authorities, Finish]

Overall, the number of users reporting problems here is very low, but I’m determined to help ensure that Chrome and Google sites work for everyone.

-Eric

Chrome Field Trials

Back in April, we announced:

Beginning in October 2017, Chrome will show the “Not secure” warning in two additional situations: when users enter data on an HTTP page, and on all HTTP pages visited in Incognito mode.

This is true, but it’s perhaps a little misleading, based on some of the tweets we’ve seen:

[Screenshot: tweet reacting to the announcement]

What isn’t mentioned in the blog post is exactly how this feature will roll out– many readers naturally assume that it’s as simple as: “If you have Chrome 62, then this feature is present.” After all, that’s how software usually works.

In Chrome, things are more interesting. Where possible, Chrome rolls out new features dynamically using the Field Trials platform. You can think of Field Trials as a set of server-controlled flags that allow Google to change Chrome’s behavior dynamically, at runtime, without shipping a new version. (Update: Google codenames their experimental system “Finch,” while Microsoft Edge codenames theirs “ECS” – Experimental Configuration System.)

We use Field Trials for two major purposes– for experimentation, and for feature rollouts.

For experimentation, we create one or more experimental groups and then compare telemetry from those clients against a control group. If a feature isn’t performing as expected (e.g. its usage declines vs. the feature it replaces, or browser crashes increase, or memory usage increases, or page load slows), the feature is tuned or removed before it officially “ships.” Experiments are often conducted on the pre-release channels (e.g. Canary, Dev, and Beta) before deciding whether or not a feature should be rolled out to the Stable channel.

After a feature has proven itself via experiments, it’s ready to roll out to users in the Stable channel. Unfortunately, pre-release channels don’t get coverage nearly as broad as we’d like, so we have to take care when a feature is first rolled out to Stable. By using a field trial, we can enable the new feature for a huge number of Stable users while still keeping it at a low percentage of the Stable user base (e.g. 1% of a billion installs == 10 million clients). We keep an eagle eye on telemetry and user-feedback reports to spot any unexpected problems, and assuming we don’t find any, we can quickly ramp up the new feature rollout to 100% of users. If we discover the feature wasn’t fit to ship for whatever reason (e.g. introduces some serious bug), we can dial it back to 0% until a fix can be created.
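The percentage-based ramp can be illustrated with a toy bucketing function: hash a stable client identifier into one of 10,000 buckets and enable the feature for clients whose bucket falls below the rollout percentage. (A hypothetical Python sketch, not Chrome’s actual Finch code; all names here are invented.)

```python
import hashlib

def in_rollout(client_id: str, feature: str, percent: float) -> bool:
    """Deterministically map a client to [0, 100) and compare to percent."""
    digest = hashlib.sha256(f"{feature}:{client_id}".encode()).digest()
    bucket = (int.from_bytes(digest[:8], "big") % 10_000) / 100.0
    return bucket < percent
```

Because the hash is deterministic, a client stays in the rollout as the percentage ramps from 1% toward 100%, and dialing the percentage back to 0% disables the feature for everyone.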

Unfortunately, field trials are one of the very few inscrutable parts of the otherwise very open Chrome– Google does not publish information about the current percentage of users in a trial, and while chrome://version/?show-variations-cmd shows which trials are currently enabled for a given client, there’s no public explanation of what the experiments actually do. Update: Here is a tool to grab the configuration data from Google directly.

Rest assured that I’m eager to push the new Not Secure warnings to 100% and I expect to get to do so very soon. If you just can’t wait, you can override the field trial and turn it on yourself by changing chrome://flags/#mark-non-secure-as and restarting Chrome.

Note that there’s not a 1:1 correspondence between Flags and Field Trials– while many trials can be overridden by flags, not all experiments have user-toggleable flags. If a feature is controlled by a flag and a field trial, when a flag hasn’t been manually configured, the field trial’s setting (if any) is not reflected in chrome://flags… the flag entry will just show Default.

Protecting your web traffic as fast as we can,

-Eric

UPDATE: See https://textslashplain.com/2019/07/16/updating-browsers-quickly-flags-respins-and-components/ for more discussion related to this topic.

Stealing your own password is not a vulnerability

By far, the most commonly-reported “vulnerability” reported to the Chrome Vulnerability Rewards program boils down to “I can steal my own password.” Despite having its very own FAQ entry, this gets reported to the VRP at varying levels of breathlessness, sometimes multiple times per day.

You can see this “attack” in action:

[Animation: revealing a saved password]

Yes, it’s true, you can use Chrome (or Edge, or Firefox) to steal your own password.

-Eric

FAQ: What about Password-Stealing software?

PS: “But… but… what if I don’t want to use this easy trick and instead decide to steal my own passwords from myself by installing some software on my own PC? Isn’t that a security bug?”

No, although “Software can read passwords if I give it full permission to run on my computer” is probably the third or fourth most common non-bug reported to Chromium’s Security@ alias. crbug.com/678171 is one of many such filings.

FAQ: What about Group Policy lockdowns?

“But… but… what if I’m a clever admin and I don’t allow my users to use the Developer Tools or run arbitrary software. Am I safe then?” Well, no. The user can enter JavaScript in the Omnibox/Address bar, or use the Settings page to view or export passwords to a file.

PPPS: “What about the “Reveal Password” (eye) icon shown in Microsoft Edge? Can that expose an autofilled password?” No.

FAQ: What about “Require A Password To Fill Passwords”?

Edge offers a setting (on edge://settings/passwords) to control the behavior of auto-fill, but this is a speed-bump, not a security boundary. A user with unrestricted access to your logged-in user account can still grab passwords via slightly more cumbersome means.

Chromium uses encryption when storing passwords in almost all cases. However, the browser itself has the decryption key (necessarily). You can read more about why an attacker who has physical access or has compromised your machine is outside of the browser’s threat model in two of my blog posts:

Working with “Big Data” in .NET

For simplicity (and because I didn’t know any better at the time), Fiddler uses plain public byte[] array fields to represent the request and response bodies. This makes working with the body data trivial for authors of extensions and FiddlerScript, but it also creates significant shortcomings. Using fields rather than properties improves performance in some scenarios, but it muddles the contract about mutability of the data and means that developers can easily accidentally create inconsistent state (e.g. by decompressing the body but forgetting to change the Content-Length, Content-Encoding, and Transfer-Encoding headers).

The more serious problem with the use of byte arrays is that they require contiguous memory allocations. In a 64-bit process, this isn’t a major problem, but in a 32-bit process, address space fragmentation means that finding an open address range larger than a few hundred megabytes is often impossible:

Address space fragmentation means there's no place for the data

If Fiddler cannot allocate contiguous memory of the required size, the resulting .NET System.OutOfMemoryException kills the Web Session. While 64-bit processes rarely suffer from address space fragmentation, the use of byte arrays still leads to a problem with large downloads—the .NET Framework imposes a cap of 0x7FFFFFC7 elements in an array, meaning that even 64-bit Fiddler is limited to storing request and response bodies that are just under two gigabytes. In practice, this is rarely a huge problem, but it’s occasionally annoying.

From an API point-of-view, I should have exposed message bodies as a Stream, so the backing data structure could be selected (and changed) as needed for performance and reliability reasons.
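One such backing structure is a chunked buffer: store the body as a list of fixed-size chunks instead of one contiguous array, so no single allocation ever needs gigabytes of contiguous address space. A minimal Python sketch of the idea (illustrative only, not Fiddler code):

```python
class ChunkedBuffer:
    """Append-only body buffer backed by fixed-size chunks."""
    CHUNK = 64 * 1024  # 64 KB per chunk; the size is an arbitrary choice

    def __init__(self):
        self._chunks = [bytearray()]

    def write(self, data: bytes) -> None:
        view = memoryview(data)
        while len(view):
            room = self.CHUNK - len(self._chunks[-1])
            if room == 0:
                self._chunks.append(bytearray())  # start a fresh chunk
                continue
            self._chunks[-1].extend(view[:room])
            view = view[room:]

    def __len__(self) -> int:
        return sum(len(c) for c in self._chunks)

    def read_all(self) -> bytes:
        return b"".join(self._chunks)
```

No chunk ever exceeds 64 KB, so fragmentation of a 32-bit address space never prevents a write; the trade-off is that readers must cope with non-contiguous data.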

Now, as it happens, Fiddler internally uses a Stream buffer when it’s reading the body from the socket—it uses a MemoryStream for this purpose. Unfortunately, the MemoryStream built into the .NET Framework itself uses a plain byte[] to store the data, which means it suffers from the same problems described before, and some additional ones. Its biggest problem is the growth algorithm for the byte array backing the MemoryStream; I’ve written at length about the issue and how I worked around it in Fiddler by creating a PipeReadBuffer object with smarter growth rules.

I thought things were as good as they could be without swapping to use a different object underlying the PipeReadBuffer, but last night Rafael Rivera pointed out a scenario that’s really broken in Fiddler today. His client was trying to download a 13.6gb ZIP file through Fiddler, and at just below 2gb the download slowed to a crawl. Looking at Fiddler.exe in Process Monitor, nearly 50% of the time was logged in garbage collection.

What’s going on?

The problem is that 64-bit Fiddler defaults to Stream and Forget bodies only when they reach 0x7FFFFFC7 bytes. For various reasons (which are likely not very compelling), Fiddler doesn’t trust the Content-Length response header and will instead keep buffering the response body until the StreamAndForget threshold is reached, at which point the response bytes will be streamed to the client, dropped, and subsequent bytes will be blindly streamed to the client without recording to a buffer. Despite that wasted buffering for the first 2 gigabytes, however, everything ought to work reasonably quickly.

Except.

When I coded the PipeReadBuffer, I made it grow by 64mb at a time, until it got to within 64mb of the .NET max array length of 0x7FFFFFC7. Within that last 64mb, instead of correctly growing all the way to 0x7FFFFFC7 bytes, it grows to exactly the length needed, with no slack bytes. That means that when the next network read comes along a millisecond later, the MemoryStream’s byte array has no free space and must be reallocated. And it gets reallocated to exactly the needed size, leaving no slack. This process repeats, with each network read meaning that .NET must:

  • Allocate an array of just under 2gb
  • Copy the 2 billion bytes from the old array to the new array
  • Copy in the ~16kb from the network read to the end of the new array
  • Free the old array

This is not a fast pattern, but things get even worse. Ordinarily, those last 64mb below the threshold will download reasonably quickly, the StreamAndForget threshold will get hit, and then all of the memory is freed and the download will proceed without buffering.

But.

TCP/IP includes a behavior called flow control, which means that the server tries to send data only as fast as the client is able to read it. When Fiddler hits the bad reallocation behavior described, it dramatically slows down how quickly it reads from the network. This, in turn, causes the server to send smaller and smaller packets. Which means Fiddler performs more and more network reads of smaller and smaller data, slowing the download of the 64mb to a virtual crawl.

Before Telerik ships a fix for this bug, anyone hitting this can avoid it with a trivial workaround—just set the Stream and Forget threshold inside Tools > Fiddler Options > Performance to something smaller than 2gb (for most users, 100mb would actually work great).
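For the curious, the corrected growth rule is easy to state: round the needed size up to the next 64mb increment, but once within one increment of the framework’s cap, jump straight to the cap so the buffer still has slack. A Python sketch of that rule (the constants mirror this post; this is illustrative, not Fiddler’s actual code):

```python
MAX_LEN = 0x7FFFFFC7        # .NET's maximum array length, in elements
GROWTH = 64 * 1024 * 1024   # grow the buffer in 64mb increments

def next_capacity(needed: int) -> int:
    """Round `needed` up to the next growth increment, clamped at the cap.

    The bug described above grew to exactly `needed` near the cap,
    leaving zero slack and forcing a ~2gb reallocation per network read.
    """
    if needed > MAX_LEN:
        raise MemoryError("body exceeds the maximum array length")
    rounded = -(-needed // GROWTH) * GROWTH  # ceiling division
    return min(rounded, MAX_LEN)
```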

-Eric

Fiddler And LINQ

Since moving to Google at the beginning of 2016, I’ve gained some perspective about my work on Fiddler over the prior 12+ years. Mostly, I’m happy about what I accomplished, although I’m a bit awed about how much work I put into it, and how big my “little side project” turned out to be.

It’s been interesting to see where the team at Telerik has taken the tool since then. Some things I’m not so psyched about (running the code through an obfuscator has been a source of bugs and annoyance), but the one feature I think is super-cool is support for writing FiddlerScript in C#. That’s a feature I informally supported via an extension, but foolishly (in hindsight) never invested in baking into the tool itself. That’s despite the fact that JScript.NET is a bit of an abomination which is uncomfortable for both proper JavaScript developers and .NET developers. But I digress… C# FiddlerScript is really neat, and even though it may take a bit of effort to port the many existing example FiddlerScript snippets, I think many .NET developers will find it worthwhile.

I’ve long been hesitant about adopting the more fancy features of the modern .NET framework, LINQ key among them. For a while, I justified this as needing Fiddler to work on the bare .NET 2.0 framework, but that excuse is long gone. And I’ll confess, after using LINQ in FiddlerScript, it feels awkward and cumbersome not to.

To use LINQ in FiddlerScript, you must be using the C# scripting engine and you must add a reference to System.Core.dll inside Tools > Fiddler Options > Scripting. Then, add using System.Linq; to the top of your C# script file.

After you make these changes, you can do things like:

    // Get every Web Session currently shown in Fiddler's session list
    var arrSess = FiddlerApplication.UI.GetAllSessions();
    // LINQ: does any session target this hostname? (HostnameIs ignores case)
    bool b = arrSess.Any(s => s.HostnameIs("Example.com"));
    FiddlerApplication.UI.SetStatusText(b ? "Found it!" : "Didn't find it.");

-Eric Lawrence

Chrome 59 on Mac and TeletexString Fields

Update: This change ended up getting backed out, after it was discovered that it impacted smartcard authentication. Thanks for self-hosting Chrome Dev builds, IT teams!

A change quietly went into Chrome 59 that may impact your certificates if they contain non-ASCII characters in a TeletexString field. Specifically, these certificates will fail to validate on Mac, resulting in either an ERR_SSL_SERVER_CERT_BAD_FORMAT error for server certificates or an ERR_BAD_SSL_CLIENT_AUTH_CERT error for client certificates. The change that rejects such certificates is presently only in the Mac version of Chrome, but it will eventually make its way to other platforms.

You can see whether your certificates use TeletexString fields with an ASN.1 decoder program, like this one. Simply upload the .CER file, and look for the TeletexString type in the output. If you find any such fields that contain non-ASCII characters, the certificate is impacted:

Non-ASCII character in string

Background: Certificates are encoded using a general-purpose data encoding scheme called ASN.1. ASN.1 specifies encoding rules, and strings may be encoded using any of a number of different data types (teletexString, printableString, universalString, utf8String, bmpString). Due to the complexity and underspecified nature of the TeletexString, as well as the old practice of shoving Latin1 strings in fields marked as TeletexString, the Chrome change takes a conservative approach to handling TeletexString, only allowing the ASCII subset. utf8String is a well-specified and well-supported standard and should be used in place of the obsolete teletexString type.
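If you’d rather script the check than use a web decoder, a minimal DER walk can flag the problem fields. (A rough sketch: 0x14 is the universal tag for TeletexString; this ignores multi-byte tags and indefinite lengths, so treat it as a triage tool only.)

```python
# Rough sketch: walk a DER-encoded blob and flag TeletexString values
# (universal tag 0x14) containing non-ASCII bytes.
def find_non_ascii_teletex(der: bytes) -> list:
    issues = []

    def walk(buf: bytes) -> None:
        i = 0
        while i + 2 <= len(buf):
            tag = buf[i]
            length = buf[i + 1]
            i += 2
            if length & 0x80:                 # long-form length
                n = length & 0x7F
                length = int.from_bytes(buf[i:i + n], "big")
                i += n
            value = bytes(buf[i:i + length])
            if tag == 0x14 and any(b > 0x7F for b in value):
                issues.append(value)          # offending TeletexString
            if tag & 0x20:                    # constructed type: recurse
                walk(value)
            i += length

    walk(der)
    return issues
```

An empty result means every TeletexString in the certificate stays within the ASCII subset that Chrome still accepts.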

To correct the problem with the certificate, regenerate it using UTF8String fields to store non-ASCII data.

-Eric Lawrence

Inspecting Certificates in Chrome

With a check-in on Monday night, Chrome Canary build 60.0.3088 regained a quick path to view certificates from the top-level security UI. When the new feature is enabled, you can just click the lock icon to the left of the address box, then click the “Valid” link in the new Certificate section of the Page Information bubble to see the certificate:

Chrome 60 Page Info dropdown showing certificate section

In some cases, you might only be interested in learning which Certificate Authority issued the site’s certificate. If the connection security is Valid, simply hover over the link to see the issuer information in a tooltip:

Tooltip shows Issuer CA

The new link is also available on the blocking error page in the event of an HTTPS error, although no tooltip is shown:

The link also available at the blocking Certificate Error page

Note: For now, you must manually enable the new Certificate section. Type chrome://flags/#show-cert-link in Chrome’s address box and hit enter. Click the Enable link and relaunch Chrome.

This section is enabled by default in Chrome 63 along with some other work to simplify the Page Information bubble.

If you want more information about the HTTPS connection, or to see the certificates of the resources used in the page, hit F12 to open the Developer Tools and click to the Security tab:

Chrome DevTools Security tab shows more information

You can learn more about Chrome’s certificate UIs and philosophy in this post from Chrome Security’s Chris Palmer.

-Eric Lawrence