
Google Internet Authority G3

For some time now, operating behind the scenes and going mostly unnoticed, Google has been changing the infrastructure used to provide HTTPS certificates for its sites and services.

You’ll note that I said mostly. Over the last few months, I’ve periodically encountered complaints from users who try to load a Google site and get an unexpected error page:

[Screenshot: the NET::ERR_CERT_AUTHORITY_INVALID error page]

Now, a variety of different problems can cause errors like this one– in most cases, the problem is that the user has some locally-installed software (security software or malware) that is generating fake certificates which are deemed invalid for various reasons.

However, when following troubleshooting steps, we’ve determined that a small number of users encountering this NET::ERR_CERT_AUTHORITY_INVALID error page are hitting it for the correct and valid Google certificates that chain through Google’s new intermediate Google Internet Authority G3. That’s weird.

What’s going on?

The first thing to understand is that Google operates a number of different certificate trust chains, and we have multiple trust chains deployed at the same time. So a given user will likely encounter some certificate chains that go through the older Google Internet Authority G2 chain and some that go through the newer Google Internet Authority G3 chain– this isn’t something the client controls.

[Screenshot: certificate chains through Google Internet Authority G2 and G3]

You can visit this GIA G3-specific test page to see if the G3 root is properly trusted by your system.

More surprisingly, it’s also the case that you might be getting a G3 chain for a particular Google site (e.g. https://mail.google.com) while some other user is getting a G2 chain for the same URL. You might even end up with a different chain simply based on what Google sites you’ve visited first, due to a feature called HTTP/2 connection coalescing.

In order to see the raw details of the certificate encountered on an error page, you can click the error code text on the blocking page. (If the site loaded without errors, you can view the certificate by clicking the lock icon in the address bar and choosing Certificate.)

Google’s new certificate chain is certainly supposed to be trusted automatically– if your operating system (e.g. Windows 7) didn’t already have the proper certificates installed, it’s expected to automatically download the root certificate from the update servers (e.g. Microsoft WindowsUpdate) and install it so that the certificate chain is recognized as trusted. In rare instances, we’ve heard of this process not working– for instance, some network administrators have disabled root certificate updates for their enterprise’s PCs.

On modern versions of Windows, you can direct Windows to check its trusted certificate list against the WindowsUpdate servers by running the following from a command prompt:

certutil -f -verifyCTL AuthRootWU

Older versions of Windows might not support the -verifyCTL command. You might instead try downloading the R2 GlobalSign Root Certificate directly and then installing it in your Trusted Root Certification Authorities:

[Screenshots: Certificate Import Wizard steps: Install Certificate, Local Machine, Trusted Root Certification Authorities store, Finish]
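If you prefer a command line to the import wizard, certutil can add the downloaded certificate to the machine’s trusted roots as well. A sketch, assuming you saved the file as GlobalSignR2.cer (the filename is just an example) and are running an elevated command prompt:

certutil -addstore Root GlobalSignR2.cer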

Overall, the number of users reporting problems here is very low, but I’m determined to help ensure that Chrome and Google sites work for everyone.

-Eric


Stealing your own password is not a vulnerability

By far, the most common “vulnerability” reported to the Chrome Vulnerability Rewards program boils down to “I can steal my own password.” Despite having its very own FAQ entry, this gets reported to the VRP at varying levels of breathlessness, sometimes multiple times per day.

You can see this “attack” in action:

[Animation: revealing your own saved password]

Yes, it’s true, you can use Chrome to steal your own password.

You can also grab a knife and stab yourself in the leg. I wonder how often knife-makers get letters to that effect?

-Eric


Working with “Big Data” in .NET

For simplicity (and because I didn’t know any better at the time), Fiddler uses plain public byte[] array fields to represent the request and response bodies. This makes working with the body data trivial for authors of extensions and FiddlerScript, but it also has significant shortcomings. Using fields rather than properties improves performance in some scenarios, but it muddles the contract about the mutability of the data and makes it easy for developers to accidentally create inconsistent state (e.g. by decompressing the body but forgetting to update the Content-Length, Content-Encoding, and Transfer-Encoding headers).
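To make that hazard concrete, here’s a minimal sketch using a toy message class (not Fiddler’s actual types, and the file path is just a placeholder) showing how a public byte[] field lets you change the body while the headers silently go stale:

    using System.Collections.Generic;
    using System.IO;
    using System.IO.Compression;

    // Toy stand-in for a Fiddler-style message; NOT the real Fiddler API.
    class ToyResponse
    {
        public Dictionary<string, string> Headers = new Dictionary<string, string>();
        public byte[] Body;   // public field: any caller can swap in new bytes
    }

    class InconsistentStateDemo
    {
        static void Main()
        {
            var response = new ToyResponse
            {
                // Placeholder path for a gzip-compressed payload.
                Body = File.ReadAllBytes("compressed-body.bin")
            };
            response.Headers["Content-Encoding"] = "gzip";
            response.Headers["Content-Length"] = response.Body.Length.ToString();

            // Decompress the body in place...
            using (var input = new GZipStream(new MemoryStream(response.Body), CompressionMode.Decompress))
            using (var output = new MemoryStream())
            {
                input.CopyTo(output);
                response.Body = output.ToArray();
            }

            // ...but nothing forces us to touch the headers, so Content-Length and
            // Content-Encoding now describe bytes that no longer exist.
        }
    }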

The more serious problem with the use of byte arrays is that they require contiguous memory allocations. In a 64-bit process, this isn’t a major problem, but in a 32-bit process, address space fragmentation means that finding an open address range larger than a few hundred megabytes is often impossible:

[Diagram: address space fragmentation means there’s no place for the data]

If Fiddler cannot allocate contiguous memory of the required size, the resulting .NET System.OutOfMemoryException kills the Web Session. While 64-bit processes rarely suffer from address space fragmentation, the use of byte arrays still leads to a problem with large downloads—the .NET Framework imposes a cap of 0x7FFFFFC7 elements in an array, meaning that even 64-bit Fiddler is limited to storing request and response bodies that are just under two gigabytes. In practice, this is rarely a huge problem, but it’s occasionally annoying.
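The cap itself is easy to demonstrate. A minimal sketch, on my understanding of the runtime’s behavior, where the allocation fails no matter how much free memory the machine has:

    using System;

    class ArrayCapDemo
    {
        // The CLR caps a byte[] at 0x7FFFFFC7 elements, so even a 64-bit
        // process cannot hold a body of ~2gb or more in a single array.
        const int MaxByteArrayLength = 0x7FFFFFC7;

        static void Main()
        {
            try
            {
                byte[] tooBig = new byte[MaxByteArrayLength + 1];  // one element over the cap
                GC.KeepAlive(tooBig);
            }
            catch (OutOfMemoryException ex)
            {
                // Thrown regardless of how much free memory is available.
                Console.WriteLine(ex.Message);
            }
        }
    }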

From an API point-of-view, I should have exposed message bodies as a Stream, so the backing data structure could be selected (and changed) as needed for performance and reliability reasons.
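Something along these lines (purely illustrative, not Fiddler’s actual object model) would have kept the contract narrow while leaving the storage strategy open:

    using System.IO;

    // Illustrative only: expose the body as a Stream so the backing store
    // (an in-memory buffer, a temp file, etc.) can change without breaking callers.
    interface IMessageBody
    {
        long Length { get; }
        Stream OpenRead();
    }

    // One possible backing: an in-memory buffer for small bodies.
    class MemoryBody : IMessageBody
    {
        private readonly byte[] _bytes;
        public MemoryBody(byte[] bytes) { _bytes = bytes; }
        public long Length => _bytes.Length;
        public Stream OpenRead() => new MemoryStream(_bytes, writable: false);
    }

    // Another possible backing: a temp file for bodies too big to buffer in memory.
    class FileBody : IMessageBody
    {
        private readonly string _path;
        public FileBody(string path) { _path = path; }
        public long Length => new FileInfo(_path).Length;
        public Stream OpenRead() => File.OpenRead(_path);
    }

With that shape, small bodies stay in memory, huge ones can spill to disk, and callers never have to change.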

Now, as it happens, Fiddler internally uses a Stream buffer when it’s reading the body from the socket—it uses a MemoryStream for this purpose. Unfortunately, the MemoryStream built into the .NET Framework itself uses a plain byte[] to store the data, which means it suffers from the same problems described above, plus some of its own. Its biggest problem is the growth algorithm used for the byte array backing the MemoryStream; I’ve written at length about that issue and how I worked around it in Fiddler by creating a PipeReadBuffer object with smarter growth rules.
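The gist of those smarter growth rules, as a simplified sketch rather than the actual PipeReadBuffer code, is to grow the buffer in large fixed increments instead of letting MemoryStream double its capacity each time it fills:

    using System;

    static class ChunkedGrowth
    {
        const int Chunk = 64 * 1024 * 1024;   // grow in 64mb increments

        // Simplified sketch of the idea, not the actual PipeReadBuffer code:
        // round the required capacity up to the next 64mb boundary rather than
        // doubling, so a 32-bit process never has to find a contiguous block
        // twice the size of the data it already holds. (Behavior near the
        // CLR's array-length cap is a separate question.)
        public static int GrowCapacity(int currentCapacity, int requiredCapacity)
        {
            if (requiredCapacity <= currentCapacity) return currentCapacity;

            long proposed = ((long)requiredCapacity + Chunk - 1) / Chunk * Chunk;
            return (int)Math.Min(proposed, int.MaxValue);
        }
    }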

I thought things were as good as they could be without swapping in a different object underlying the PipeReadBuffer, but last night Rafael Rivera pointed out a scenario that’s really broken in Fiddler today. His client was trying to download a 13.6gb ZIP file through Fiddler, and at just below 2gb the download slowed to a crawl. Looking at Fiddler.exe in Process Monitor showed that nearly 50% of its time was spent in garbage collection.

What’s going on?

The problem is that 64-bit Fiddler’s default Stream and Forget threshold is 0x7FFFFFC7 bytes. For various reasons (which are likely not very compelling), Fiddler doesn’t trust the Content-Length response header and instead keeps buffering the response body until the StreamAndForget threshold is reached, at which point the buffered bytes are streamed to the client and dropped, and subsequent bytes are blindly streamed to the client without being recorded to a buffer. Despite that wasted buffering for the first 2 gigabytes, however, everything ought to work reasonably quickly.

Except.

When I coded the PipeReadBuffer, I made it grow by 64mb at a time until it got to within 64mb of the .NET maximum array length of 0x7FFFFFC7. Once within 64mb of that cap, instead of correctly growing to 0x7FFFFFC7 bytes, the buffer grows to exactly the length needed, with no slack bytes. That means that when the next network read comes along a millisecond later, the MemoryStream’s byte array has no free space and must be reallocated. And it gets reallocated to exactly the needed size, again leaving no slack. This process repeats (sketched in code after the list below), with each network read meaning that .NET must:

  • Allocate an array of just under 2gb
  • Copy the 2 billion bytes from the old array to the new array
  • Copy in the ~16kb from the network read to the end of the new array
  • Free the old array
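In code terms, my reconstruction (not the shipping source) of the difference between the behavior described above and what was intended looks roughly like this:

    using System;

    static class NearCapGrowth
    {
        const int Chunk = 64 * 1024 * 1024;      // 64mb growth increment
        const int MaxArrayLength = 0x7FFFFFC7;   // the CLR's cap on byte[] length

        // Roughly what shipped: once within 64mb of the cap, grow to exactly the
        // size needed, so the very next network read forces another reallocation
        // and another ~2gb copy.
        public static int GrowBuggy(int required) =>
            required > MaxArrayLength - Chunk ? required : RoundUp(required);

        // What was intended: once within 64mb of the cap, jump straight to the
        // cap, leaving slack for the reads that arrive before the Stream and
        // Forget threshold kicks in.
        public static int GrowFixed(int required) =>
            required > MaxArrayLength - Chunk ? MaxArrayLength : RoundUp(required);

        static int RoundUp(int required) =>
            (int)Math.Min(((long)required + Chunk - 1) / Chunk * Chunk, MaxArrayLength);
    }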

This is not a fast pattern, but things get even worse. Ordinarily, those last 64mb below the threshold will download reasonably quickly, the StreamAndForget threshold will get hit, and then all of the memory is freed and the download will proceed without buffering.

But.

TCP/IP includes a behavior called flow control, which means that the server tries to send data only as fast as the client is able to read it. When Fiddler hits the bad reallocation behavior described, it dramatically slows down how quickly it reads from the network. This, in turn, causes the server to send smaller and smaller packets. Which means Fiddler performs more and more network reads of smaller and smaller data, slowing the download of the 64mb to a virtual crawl.

Until Telerik ships a fix for this bug, anyone hitting it can avoid it with a trivial workaround—just set the Stream and Forget threshold inside Tools > Fiddler Options > Performance to something smaller than 2gb (for most users, 100mb would actually work great).

-Eric
