File the Bug

Two experiences this week reminded me of a very important principle for improving the quality of software… if you see something, say something. And the best way to do that is to file a bug.

Something Weird? File a bug!

The first case was last Thursday, when a user filed a bug in Chrome’s tracker noting that Chrome’s window border icons often got “stuck” in a hover state after being moused over. It was a clear, simple bug report and it was easily reproduced. I’ve probably hit this a hundred times over the years and didn’t think much of it… “probably some weird thing in my graphics card or some utility I’m running.” It never occurred to me that everybody else might be seeing this, or that it was exhibited only by Chrome.

Fortunately, the bug report showed that this issue was something others were hitting too, so I took a look. The problem proved to be almost unique to Chrome (not occurring in other Windows applications), and has existed for at least seven years, reproducing on every version from Windows Vista to Windows 10.

A scan of the bug tracker suggests that Thursday’s report was the first time in those seven years that this bug was filed; less than a week later, the simple fix is checked in and on the way to Chrome 54. Obviously, this is only a minor cosmetic issue, but we want our browser looking good at all times!

Animation of the fixed bug

Another cool aspect of this fix is that it will benefit other applications too… the Opera and Vivaldi browsers are built on the same open-source Chromium roots and inherited this problem; they’ll probably pick up the fix shortly as well.

th;df – Say Something Anyway

Even if you don’t file a bug, you should still say something. Recently, Ana Tudor noted on Twitter that her system was in a state after restart where neither Chrome nor Brave could render web content; both browsers showed the “crashed tab” experience, even after restarting and reinstalling the browsers. Running with the --no-sandbox flag worked, and rebooting the system resolved the problem. Her report sounded suspiciously similar to a problem I’d encountered back in April; fortunately, I’d filed a bug.

At the time, that bug was deemed unreproducible and I’d dismissed it as some wonkiness on my specific system, but Ana’s complaint brought this back to my attention. She’d also added another piece of data I didn’t have in my original report—the problem also occurred in Brave, but not Firefox or IE.

Even more fortunately, I hit this problem again after a system reboot yesterday, and because of Ana’s report, I was no longer convinced that this bug was some weird quirk on just my system. Playing with the repro, I found that neither Opera nor Vivaldi reproduced the problem; both of those browsers are architecturally similar to Brave, but importantly, both are 32-bit. So this was a great clue that the problem was specific to 64-bit. And I confirmed this, finding that the bug repro’d only in 64-bit Chrome Canary but not in 32-bit Canary. Now we’re cooking with gas!

I built Chromium and ran it through WinDBG, seeing that when the sandboxed content renderer process was starting up, it was hitting three debug breakpoints before dying. The breakpoints were in sandbox::InterceptionAgent::OnDllLoad, a function Chrome uses to thunk certain Windows APIs to inject security filters. At this point, and with a reliable repro in hand, my smarter colleague took over and quickly found that the code to allocate memory for the thunk was failing, due to some logic bugs. Thunks must be located at a particular place in memory – within 2GB of the thunked function – and the code to place our thunks was failing when ASLR randomly loaded the kernel32, gdi32, and user32 DLLs at the very top of the address space, leaving no room for our thunks. When the allocation failed, Chrome refused to allow the DLL to be loaded into the sandbox, and the renderer necessarily died. After the user rebooted the system, ASLR moved the DLLs to some other location, and (usually) the new location left enough room to place our thunks. With 20/20 hindsight, both the root cause of this bug and the upcoming fix are obvious.
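To make the constraint concrete, here is a tiny illustrative sketch (not Chrome’s actual sandbox code; the constants are approximations): a thunk reached via a 32-bit relative jump must land within 2GB of the function it patches, so when the patched DLL sits at the very top of the address space, much of the candidate window is simply clipped away.

    # Illustrative only -- not Chrome's sandbox code. A rel32 jump can reach at
    # most +/-2GB from the patched function, so the thunk must be placed inside
    # that window, clipped to the process's usable address space.
    TWO_GB = 0x80000000
    USER_SPACE_TOP = 0x7FFFFFFFFFFF   # approximate top of 64-bit user-mode space; varies by Windows version

    def thunk_placement_window(patched_function_addr):
        lo = max(0x10000, patched_function_addr - TWO_GB)        # skip the reserved low region
        hi = min(USER_SPACE_TOP, patched_function_addr + TWO_GB)
        return lo, hi

    # When ASLR loads a DLL near the very top of the address space, the upper
    # half of the window disappears, leaving far less room to find a free slot.
    print([hex(a) for a in thunk_placement_window(0x7FFFFFE00000)])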

But we only knew to look for the problem because Ana took the time to say something.

Final Thoughts

  • Browser telemetry is great—we catch crashes and all sorts of problems with it. But debugging via telemetry can be really challenging—more akin to solving a mystery than following a checklist. For instance, in the case of the sandbox bug, the fact that the problem reproduced in Brave was a huge clue, and not something we’d ever know from telemetry.
  • Well-run projects love bug reports. Back when I was building Fiddler, a lot of users I talked to said things like: “Well, it’s free and pretty good, so I didn’t want to bother you with a complaint about some bug.” This is exactly backwards. For most of Fiddler’s lifetime, bug reports from the community were the only compensation I received for making the tool available to everyone for free. Getting bug reports meant I could improve the product without having to pay for test machines and devices, hire a test organization, and so on. When I eventually sold Fiddler to Telerik, a large part of the value they were buying was the knowledge that the tool had been battle-tested by millions of users over nine years and that I’d fixed thousands of bugs reported by that community.
  • Filing bugs is generally easy, and it’s especially easy for Chrome.
    • First, simply search for an existing bug on crbug.com
    • If you find it’s a known issue, star it so you get updates
    • If it’s not a known issue, click the New Issue button at the top-left
    • Tell us as much as you can about the problem. Try to put yourself in the reader’s shoes—we don’t know much about your system or scenario, so the more details you can provide, the better.
  • Screenshots and URLs that reproduce problems are invaluable.
  • Find a bug in another browser? Report it!


Thanks for your help in improving our digital world!


-Eric

Using Fiddler With iOS 10 and Android 7

If you’ve tried to use Fiddler with iOS10 beta or Android 7 Nougat, you have probably found that HTTPS decryption isn’t working, even if you use the latest Fiddler and the Fiddler Certificate Maker add-on. Unfortunately, at the moment both platforms are broken, but for different reasons. In both cases, the client will fail to receive responses for HTTPS requests, and Fiddler will only show a CONNECT tunnel.

iOS 10 Change

After installing the FiddlerRoot certificate, one also needs to go to Settings -> General -> About -> Certificate Trust Settings and manually enable full trust for the FiddlerRoot root certificate, including accepting a dialog that says that this will allow a third-party to eavesdrop on all your communications.

iOS 10 Beta Bug (Fixed for final version)

The beta of iOS 10 had a bug whereby, if the response to an HTTP CONNECT tunnel request contained a Connection: close response header, the client would close the connection instead of doing as it should and keeping the tunnel open until the underlying TCP/IP connection closed. A few minor platforms have had the same bug over the years, but iOS is definitely the first important platform with this issue. At least two bugs have been filed with the Apple “Radar” bug reporter.

Working around this limitation is simple. In Fiddler, click Rules > Customize Rules. Scroll to the OnBeforeResponse function. Just inside that function, add the following lines:

  if (oSession.HTTPMethodIs("CONNECT")) {
    oSession["ui-backcolor"] = "red";
    oSession.ResponseHeaders.Remove("Connection");
  }

Save the file and try connecting again.

Android 7 Feature

In contrast to the iOS regression, the change in Android 7 was intentional. The Android team has decided that, by default, HTTPS certificate validation for apps targeting API Level 24 and later will ignore all user-installed root certificates, meaning that your efforts to manually trust Fiddler’s root certificate will be fruitless. Individual application developers can temporarily override this change while debugging by updating the application’s configuration:

<network-security-config>
    <debug-overrides>
        <trust-anchors>
            <!-- Trust user added CAs while debuggable only -->
            <certificates src="user" />
        </trust-anchors>
    </debug-overrides>
</network-security-config>

…or at all times…

<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- Trust preinstalled CAs -->
            <certificates src="system" />
            <!-- Additionally trust user added CAs -->
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>
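In either case, the configuration file also has to be referenced from the application’s manifest; a minimal sketch (the resource name is a placeholder):

    <application android:networkSecurityConfig="@xml/network_security_config" ... >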

Unfortunately, these changes can only be made by application developers, not end-users. End-users will probably need to root their devices, akin to what is required to circumvent certificate pinning.

Certificate Validity Length

By default, Fiddler-generated certificates are valid for five years (and backdated one year). However, this can cause an ERR_CERT_VALIDITY_TOO_LONG error in Chrome on Android. To fix this for Fiddler’s default (CertEnroll) certificate generator, run about:config in QuickExec to edit preferences, set fiddler.certmaker.ValidDays to 820, and, if needed, reset your certificates in Fiddler using the Tools > Fiddler Options > HTTPS > Actions button.

-Eric

Cheating Authenticode, Redux

Back in 2014, I explained two techniques that have been used by developers to store information in Authenticode-signed executables without breaking the signature, including information about the EnableCertPaddingCheck registry flag that can be set to break the technique¹.

Recently, Kevin Jones pointed out that Chrome’s signed installer differs on each download, as you can see in this file comparison of two copies of the Chrome installer, one downloaded from IE and one from Firefox:

Binary diff shows different bytes

Surprisingly, the Chrome installer is not using either unvalidated data-injection technique I wrote about previously.

So, what’s going on?

The Extra Certificate Technique

Fortunately, Kevin wrote an awesome tool for examining Authenticode signatures at a deep level: Authenticode Lint, which he describes in this blog post. Running his tool with the default options immediately reveals the solution to our mystery. The signing block contains an extra certificate named (literally) “Dummy Certificate”:

Authenticode Lint reveals "Dummy Certificate"

By running the tool with the -extract argument, we can view the extra (untrusted) certificate included in the signature block and see that it contains a proprietary data field with the per-instance data:

You can see this technique in use inside Microsoft Edge installers as well.

This technique for injecting unsigned data into a signature block could be a source of vulnerabilities (like this attack) unless you’re very careful; anyone using this Extra Certificate technique should follow the same best practices previously described for the Unvalidated Attributes technique. (Kevin further explored a defense against this sort of abuse in his post about Authenticode Sealing.)

Authenticode Lint is a great tool, and I strongly recommend that you use it to help ensure you’re following best practices for Authenticode signing. My favorite feature is that you can automate the tool to verify an entire folder tree of binaries. You can thus be confident that you’ve signed all of the expected files, and done so properly.

-Eric

¹ In the years since my original “Cheating Authenticode” post, I’ve learned that even when EnableCertPaddingCheck is not enabled, the signature verification API will scan the data in the signature block of the file to attempt to detect markers of popular data-archival formats (e.g. PKZIP magic bytes). If such markers are found, the signature will be treated as invalid. This marker scanning was a Windows Vista-era mitigation (later backported) that attempts to block abuse of insecurely designed, signed self-extracting executables: an attacker could swap out the compressed payload for a malicious payload without breaking the file’s signature.

TLS Fallbacks are Dead

Gravestone reading RIP Fallbacks

tl;dr

Just over 5 years ago, I wrote a blog post titled “Misbehaving HTTPS Servers Impair TLS 1.1 and TLS 1.2.”

In that post, I noted that enabling versions 1.1 and 1.2 of the TLS protocol in IE would cause some sites to load more slowly, or fail to load at all. Sites that failed to load were sending TCP/IP RST packets when receiving a ClientHello message that indicated support for TLS 1.1 or 1.2; sites that loaded more slowly relied on the fact that the browser would retry with an earlier protocol version if the server had sent a TCP/IP FIN instead.
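In rough terms, the fallback dance worked like the sketch below; this is a minimal Python illustration of the legacy behavior (not any browser’s actual code), and modern clients have deliberately removed it. The host name is a placeholder.

    import socket
    import ssl

    # Illustrative sketch of legacy browser-style TLS version fallback.
    def connect_with_fallback(host, port=443):
        for version in (ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1):
            ctx = ssl.create_default_context()
            ctx.minimum_version = version   # pin the handshake to exactly one protocol version
            ctx.maximum_version = version
            try:
                with socket.create_connection((host, port), timeout=5) as raw:
                    with ctx.wrap_socket(raw, server_hostname=host) as tls:
                        return tls.version()
            except OSError:
                continue   # RST, FIN, or handshake failure: retry with an older version
        raise ConnectionError("no offered TLS version was accepted")

    # print(connect_with_fallback("example.com"))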

TLS version fallbacks were an ugly but practical hack– they allowed browsers to enable stronger protocol versions before some popular servers were compatible. But version fallback incurs real costs:

  • security – a MITM attacker can trigger fallback to the weakest supported protocol
  • performance – retrying handshakes takes time
  • complexity – falling back only in the right circumstances, creating new connections as needed
  • compatibility – not all clients are willing or able to fall back (e.g. Fiddler never falls back)

Fortunately, server compatibility with TLS 1.1 and 1.2 has improved a great deal over the last five years, and browsers have begun to remove their fallbacks; first, fallback to SSL 3 was disabled, and now Firefox 37+ and Chrome 50+ have removed version fallback entirely.

In the rare event that you encounter a site that needs fallback, you’ll see a message like this, in Chrome:

Google Chrome 52 error

or in Firefox:

Firefox 45 error

Currently, both Internet Explorer and Edge still fall back; first, a TLS 1.2 handshake is attempted:

TLS 1.2 ClientHello bytes

and after it fails (the server sends a TCP/IP FIN), a TLS 1.0 attempt is made:

TLS 1.0 ClientHello bytes

This attempt succeeds and the site loads in IE.

UPDATE: Windows 11 disables TLS/1.1 and TLS/1.0 by default for IE. Even if you manually enable these old protocol versions, fallbacks are not allowed unless a registry/policy override is set.

To allow fallback, set EnableInsecureTlsFallback = 1 (DWORD) under either:

    HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings
    HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings

If you analyze the affected site using SSLLabs, you can see that it has a lot of problems; the key one for this case is shown in the middle:

Grade F on SSLLabs

This is repeated later in the analysis:

TLS Version intolerant for versions 1.2 and 1.3

The analyzer finds that the server refuses not only TLS 1.2 but also the upcoming TLS 1.3.

Unfortunately, as an end-user, there’s not much you can safely do here, short of contacting the site owners and asking them to update their server software to support modern standards. Fortunately, this problem is rare– the Chrome team found that only 0.0017% of TLS connections triggered fallbacks, and this tiny number is probably artificially high (since a spurious connection problem will trigger fallback).

-Eric

Non-Secure Clicktrackers–The Fastest Path from A+ to F

HTTPS only works if you use it.

Coinbase is an online bitcoin exchange backed by $106M in venture capital investment. They’ve got a strong HTTPS security posture, including the latest ciphers, a 4096-bit RSA key, and advanced features like browser-preloaded HSTS and HPKP.

SSLLabs grades Coinbase’s HTTPS deployment an A+:

A+ Grade from SSLLabs

This is a well-secured site with a professional security team.

Here’s the email they just sent me:

"Add a debit card"

Let’s run the MoarTLS Analyzer on that:

All Red

That’s right… every hyperlink in this email is non-secure and any click can be intercepted and sent anywhere by a network-based attacker.
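Checking for this sort of mistake doesn’t require special tooling; here’s a minimal Python sketch of the kind of scan MoarTLS performs, run against a saved copy of an email (the filename is a placeholder):

    from html.parser import HTMLParser

    class InsecureLinkFinder(HTMLParser):
        """Print every hyperlink that isn't delivered over HTTPS."""
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href", "")
                if href.lower().startswith("http://"):
                    print("Non-secure link:", href)

    with open("newsletter.html", encoding="utf-8") as f:   # hypothetical saved copy of the email
        InsecureLinkFinder().feed(f.read())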

Sadly, Coinbase is far from alone in snatching security defeat from the jaws of victory; my #HTTPSFAIL folder includes a lot of other big names:

HTTPSFailures


It doesn’t matter how well you secure your castle if you won’t help your visitors get to it securely. Use HTTPS everywhere.


-Eric

Update: I filed a bug with Coinbase on HackerOne. Their security team says that they “agree” that these links should be HTTPS, but the problem is Mailchimp (their email vendor) and they can’t fix it. Mailchimp offers a security vulnerability reporting form, delivered exclusively over HTTP:

MailChimp

Coinbase isn’t the first service whose security is bypassed because their emails are sent with non-secure links; the Brave browser download announcements suffered the same problem.

File Paths in Windows

Handling file-system paths in Windows can have many subtleties, and it’s easy to forget how some of this very intricate system works under the covers.
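One classic subtlety is the 260-character MAX_PATH limit; here is a hedged, Windows-only Python sketch of stepping past it using the \\?\ extended-length prefix (the directory names are arbitrary):

    import os

    # Windows-only illustration: build a path longer than the legacy 260-character
    # MAX_PATH limit, then create it via the \\?\ extended-length prefix.
    deep_path = r"C:\temp" + ("\\" + "x" * 60) * 6    # roughly 370 characters
    print(len(deep_path))
    os.makedirs("\\\\?\\" + deep_path, exist_ok=True)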

Happily, a .NET developer has started blogging a bit about file paths, presumably as they work to improve .NET’s handling of paths longer than the legacy MAX_PATH limit of 260 characters:

Also not to be missed are blog posts about Windows’ handling of File URIs:


-Eric

Stupid HexEdit Tricks

In the summer of 2015, I changed my default browser on Windows from Internet Explorer to Chrome, and for the most part, I haven’t looked back—Chrome is fast and stable.

The only real stumbling block I keep hitting is that the Alt+F,C keyboard chord isn’t bound to the command [File Menu > Close tab] as it is in nearly every other browser and Windows application that supports tabs. Neither of the alternative hotkeys (Ctrl+F4 or Ctrl+W) feels as natural (especially on laptop keyboards) and I haven’t been able to get accustomed to using either of them.

Unfortunately, while Chrome’s extensibility model is powerful, there’s no mechanism to add commands to its menus, and writing an extension that takes over Alt+F (suppressing its default behavior of opening the hamburger menu) is an unpleasant alternative.

Now that I’m building Chrome regularly, I considered just maintaining my own private fork, but compiling Chrome is an undertaking not to be taken lightly.

So… what to do? Write a utility process that thread-injects into Chrome to change the menu? Install a global keyboard hook? Something even more fragile?

Things were simpler when I was 15 years old… everything was closed-source, and when a program didn’t work as I liked, I didn’t have a lot of confusing choices. For instance, I once bought a CD of street maps but the viewer program’s window refused to resize larger than 800×600 pixels. Not having any better ideas, I opened the binary in a hex editor program and looked for the sequence of bytes 20 03 near the sequence 58 02, the hexadecimal encodings of 800 and 600 respectively. I then changed both values to 40 06, hoping to set a maximum size of 1600×1600 pixels. And to my surprise and delight, this worked fine.
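For the curious, those byte pairs are just the dimensions stored as little-endian 16-bit integers; a quick Python illustration:

    import struct

    # 16-bit little-endian encoding: the low-order byte comes first in the file.
    for value in (800, 600, 1600):
        print(value, "->", struct.pack("<H", value).hex(" "))
    # 800 -> 20 03, 600 -> 58 02, 1600 -> 40 06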

Luckily for 36-year-old me, Chrome recently added a “Cast” command to the Alt+F menu (if enabled via chrome://flags/#media-router); I don’t ever need to cast my tabs, but there’s now a menu entry that responds to the Alt+F,C chord. All I need to do is change what actually happens when you click it.

Even more fortunately, Chrome is open-source. So I can easily see how the Cast command was added to the code:

  if (media_router::MediaRouterEnabled(browser_context_)) {
    menu_model_.AddItemWithStringId(IDC_ROUTE_MEDIA,
                                    IDS_MEDIA_ROUTER_MENU_ITEM_TITLE);

I now know that IDC_ROUTE_MEDIA is the command identifier for the action taken when the user invokes the menu item. IDC_ROUTE_MEDIA is defined as 35011, or 0x88c3, which is represented in the binary as the little-endian bytes C3 88. And my instinctive hunch that the desired command identifier would be named IDC_CLOSE_TAB pans out—it’s defined as 34015, or 0x84df. So, all we need to do is overwrite the correct C3 88 with DF 84 and maybe we’ll magically get the behavior we want?

Uh oh. It turns out that there are almost 300 instances of C3 88 in Chrome.dll… how do we know which one to replace?

Well, the correct replacement will probably be the one near IDS_MEDIA_ROUTER_MENU_ITEM_TITLE (defined as 14630, or 0x3926, which appears as the bytes 26 39), right? Let’s whip up a dumb little program to search for those values near each other. Running the program, we find just two instances… one represents Chrome’s main menu and the other is used for the webpage’s context menu.
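Here’s a rough Python sketch of that dumb little program (the DLL path and the “nearness” window are arbitrary choices):

    # Find offsets where the IDC_ROUTE_MEDIA command ID (little-endian C3 88)
    # appears near the IDS_MEDIA_ROUTER_MENU_ITEM_TITLE string ID (26 39).
    COMMAND_ID = bytes.fromhex("c388")
    STRING_ID = bytes.fromhex("2639")
    WINDOW = 64   # how many bytes away still counts as "near" -- an arbitrary choice

    data = open("chrome.dll", "rb").read()   # path is illustrative
    offset = data.find(COMMAND_ID)
    while offset != -1:
        if STRING_ID in data[max(0, offset - WINDOW): offset + WINDOW]:
            print(f"Candidate patch site at offset 0x{offset:X}")
        offset = data.find(COMMAND_ID, offset + 1)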

Unfortunately, this search approach isn’t completely stable, because Chrome’s string resources are generated (generated_resources.h) and thus they may change from build to build. To automate this (since Chrome releases new Canary builds daily, and minor updates to Stable builds every few weeks) we’ll need to use a more relaxed signature to find the bytes of interest.

For similar reasons, we cannot reliably just overwrite the menu text identifier 26 39 with 2D 42 (IDS_TAB_CXMENU_CLOSETAB aka “Close tab”) to change the menu text, since that definition also fluctuates. We could edit the raw “Cast…” menu text string inside en-US.pak, but if we do that it’ll change both the main menu AND the context menu. Since I hit ALT+F,C blindly, I’ll just leave that string alone.

And voila… the dumbest thing that could possibly work.

But… But… Security?

To overwrite a machine-installed Chrome inside the Program Files folder, you must already be running as Administrator. And an Administrator can already control everything on the machine.

Overwriting a per-user-installed Chrome inside the %LocalAppData% folder only gives you the ability to hack yourself.

So, what about code-signing?

As you might hope, Chrome.dll is code-signed:

SHA1 and SHA256 signed

… and if you modify the bytes, the signature is no longer valid:

Signature Invalid

However, this matters relatively little in practice. Windows itself generally does not verify module signatures except under special configurations (e.g. AppLocker). So nobody complains about our little tweak. Some AV programs may freak out.

What Else Might Go Wrong?

I was originally worried that Chrome’s fancy delta-patching system might break when Chrome.dll has been modified, but it turns out that the delta-patches are computed against the cached installer binary. So there’s no harm there.

From experience, I can tell you that whenever Chrome does something weird, I now assume it’s the fault of this little hackery, even when it isn’t.


As smart people have said… just because you can do something, doesn’t mean you should.

badIdea

-Eric

Bolstering HTTPS Security

Last Update: 26 October 2023

When #MovingToHTTPS, the first step is to obtain the necessary certificates for your domains and enable HTTPS on your webserver. After your site is fully HTTPS, there are some other configuration changes you should consider to further enhance the site’s security.

Validate Basic Configuration

First, use SSLLabs’ Server Test to ensure that your existing HTTPS settings (cipher suites, etc.) are configured well.

SECURE Cookies

After your site is fully secure, all of your cookies should be set with the SECURE attribute to prevent them from ever leaking over a non-secure channel. The (confusingly-named) HTTPOnly attribute should also be set on each cookie that doesn’t need to be accessible from JavaScript.

    Set-Cookie: SESSIONID=b12321ECLLDGH; secure; path=/

Strict Transport Security (HSTS)

Next, consider enabling HTTP Strict Transport Security. By sending an HSTS header from your HTTPS pages, you can ensure that all visitors navigate to your site via HTTPS, even if they accidentally type “http://” or follow an outdated, non-secure link.

HSTS also ensures that, if a network attacker were to try to intercept traffic to your HTTPS site using an invalid certificate, a non-savvy user can’t wreck their security by clicking through the browser’s certificate error page.

After you’ve tested out a short-lived HSTS declaration, validated that there are no problems, and ramped it up to a long-duration declaration (e.g. one year), you should consider requesting that browsers pre-load your declaration to prevent any “bootstrap” threats (whereby a user isn’t protected on their very first visit to your site).

    Strict-Transport-Security: max-age=631138519; includeSubDomains; preload

The includeSubdomains attribute indicates that all subdomains of the current page’s domain must also be secured with HTTPS. This is a powerful feature that helps protect cookies (which have weird scoping rules) but it is also probably the most common source of problems because site owners may “forget” about a legacy non-secure subdomain when they first enable this attribute.

Strict-Transport-Security is supported by virtually all browsers.

Certificate Authority Authorization (CAA)

Certificate Authority Authorization (supported by LetsEncrypt) allows a domain owner to specify which Certificate Authorities should be allowed to issue certificates for the domain. All CAA-compliant certificate authorities should refuse to issue a certificate unless they are the CA of record for the target site. This helps reduce the threat of a bad guy tricking a Certificate Authority into issuing a phony certificate for your site.

The CAA rule is stored as a DNS resource record of type 257. You can view a domain’s CAA rule using a DNS lookup service. For instance, this record for Google.com means that only Symantec’s Certificate Authority may issue certificates for that host:

issue symantec.com

Configuration of CAA rules is pretty straightforward if you have control of your DNS records.
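For example, a domain that wants only Let’s Encrypt to issue its certificates would publish something like the following (the domain is a placeholder):

    example.com.  CAA  0 issue "letsencrypt.org"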

Public Key Pinning (HPKP)

Unfortunately, many CAs have made mistakes over the years (e.g. DigiNotar, India CCA, CNNIC CA, ANSSI CA) and the fact that browsers trust so many different certificate authorities presents a large attack surface.

To further reduce the attack surface from sloppy or compromised certificate authorities, you can enable Public Key Pinning. Like HSTS, HPKP rules are sent as HTTPS response headers and cached on the client. Each domain’s HPKP rule contains a list of Public Key hashes that must be found within the certificate chain presented by the server. If the server presents a certificate and none of the specified hashes are found in that certificate’s chain, the certificate is rejected and (typically) a failure report is sent to the owner for investigation.

Public Key Pinning is a powerful feature, and sites must adopt it carefully—if a site carelessly sends a long-life HPKP header, it could deny users access to the site for the duration of the HPKP rule.

    Public-Key-Pins: pin-sha256="8Rw90Ej3Ttt8RRkrg+WYDS9n7IS03bk5bjP/UXPtaY8=";
        pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg=";
        max-age=600; includeSubdomains;
        report-uri="https://report-uri.io/report/93cb21052c"

Free services like ReportUri.io can be used to collect HPKP violation reports.

HPKP is supported by some browsers. 2023 UPDATE: “Dynamic” PKP support, whereby sites can opt in via HTTPS response headers, has been removed from most browsers because it proved to be a “foot-gun” that led to unintentional, self-inflicted denial-of-service mistakes. Chromium-based browsers ship a tiny built-in list of pins as part of their HSTS preload list and now consider Certificate Transparency the right way to go.

Certificate Transparency

Certificate Transparency is a scheme whereby each CA must record every certificate it issues in a public, auditable log. These logs can be monitored by site and brand owners for unexpected certificates, helping expose fraud and phony domains that might otherwise go unnoticed.

Site owners who want a lightweight way to look for all CT-logged certificates for their domains can use the Certificate Transparency Lookup Tool. Expect that other tools will become available to allow site owners to subscribe to notifications about certificates of interest.

Since 2015, Chrome has required that all EV certificates issued be logged in CT logs. Future browsers will almost certainly offer a way for a site to indicate to visiting browsers that it should Expect or Require that the received certificate be found in CT logs, reporting or blocking the connection if it is not. Update: What actually happened is that in 2018 browsers began requiring CT logging of all certificates. See my later post for details on an easy way to monitor certificate transparency logs.

Thanks for your help in securing the web!

-Eric

SHA256 and Authenticode REDUX^2

Note: Microsoft has not confirmed this change yet; analysis below comes from looking at behavior of 14 signed installers.

In December of last year, I wrote about all of the different places hashes are used in code-signing. Then, in January I blogged that Windows 10 had stopped accepting SHA-1 certificates and certificate chains for Authenticode-signed binaries (unless a timestamp marked the binary as being signed before 1/1/2016).

I called out the fact that while SHA1 certificates were verboten, “SHA1 file digests are still allowed (heck, MD5 digests are still allowed!)”.

With Windows 10 build 14316, things have changed again. It appears that SHA-1 file digests are now forbidden too, at least in the download codepath of Edge and Internet Explorer. This interrupts the download of Firefox, Opera, Fiddler, and other programs:

Firefox signature rejected

Fiddler signature rejected

As with the earlier lockdown, if you examine the signature in Windows Explorer, it’ll tell you that everything is “OK.”

File Hash is SHA1 and Cert is SHA256, UI shows 'ok'

To resolve this, you should dual-sign your binaries using both SHA-1 and SHA-256. It appears that the same Windows 10 14316 build has a bug whereby a signature containing only a SHA256 file digest will cause the download in IE/Edge to hang at 99%.
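With Microsoft’s signtool, dual-signing is typically a normal SHA-1 signing pass followed by an appended SHA-256 signature; roughly like the following (the timestamp URLs are placeholders, and exact flags vary a bit by SDK version):

    signtool sign /fd sha1 /t http://timestamp.example.com MySetup.exe
    signtool sign /fd sha256 /tr http://timestamp.example.com /td sha256 /as MySetup.exe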

-Eric

Silliness – Fiddler Blocks Malware

Enough malware researchers now depend upon Fiddler that some bad guys won’t even try to infect your system if you have Fiddler installed.

The Malwarebytes blog post has the details, but the gist of it is that the attackers use JavaScript to probe the would-be victim’s PC for a variety of software. Beyond Kaspersky, TrendMicro, and MBAM security software, the fingerprinting script also checks for VirtualBox, Parallels, VMWare, and Fiddler. If any of these programs are thought to be installed, the exploit attempt is abandoned and the would-be victim is skipped. This attempt to avoid discovery is called cloaking.

This isn’t the only malware we’ve seen hiding from Fiddler—earlier attempts used tricks to detect whether Fiddler was actively running and intercepting traffic, and abandoned the exploit only if it was.

This behavior is, of course, pretty silly. But it makes me happy anyway.

Preventing Detection of Fiddler

Malware researchers who want to help ensure Fiddler cannot be detected by bad guys should take the following steps:

  1. Do not put Fiddler directly on the “victim” machine; instead, run it as a remote proxy.
    – If you must install locally, at least install it to a non-default path.
  2. Run Fiddler as a proxy server external to the victim machine (either on a different physical machine or in a VM):
    – Tick the Tools > Fiddler Options > Connections > Allow remote computers to connect checkbox. Restart Fiddler and ensure the machine’s firewall allows inbound traffic to port 8888.
    – Point the victim’s proxy settings at the remote Fiddler instance.
    – Visit http://fiddlerserverIP:8888/ from the victim and install the Fiddler root certificate.
  3. Click Rules > Customize Rules and update the FiddlerScript so that the OnReturningError function wipes the response headers and body and replaces them with nondescript strings. Some fingerprinting JavaScript will generate bogus AJAX requests and then scan the response for signs of Fiddler.

-Eric