The Fiddler Web Debugger is now old enough to drink, but I still use it pretty much every day. Fiddler hasn’t aged entirely gracefully as platforms and standards have changed over the decades, but the tool is extensible enough that some of the shortcomings can be fixed by extensions and configuration changes.
Last year, I looked back at a few of the mistakes and wins I had in developing Fiddler, and in this post, I explore how I’ve configured Fiddler to maximize my productivity today.
Power Up with FiddlerScript & Extensions
Add a SingleBrowserMode button to Fiddler’s toolbar
By default, Fiddler registers itself as the system proxy and almost all applications on the system will immediately begin sending their traffic through Fiddler. While this can be useful, it often results in a huge amount of uninteresting “noise”, particularly for web developers hoping to see only browser traffic. Fiddler’s rich filtering system can hide traffic based on myriad criteria, but for performance and robustness reasons, it’s best not to have unwanted traffic going through Fiddler at all.
The easiest way to achieve that is to simply not register as the system proxy and instead just launch a single browser instance whose proxy settings are configured to point at Fiddler’s endpoint.
Adding a button to Fiddler’s toolbar to achieve this requires only a simple block of FiddlerScript:
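The embedded gist doesn’t render here, but a minimal sketch of the idea looks something like this (the button caption, browser path, and profile directory are illustrative; adapt them to your own setup):

```js
// Paste into the Handlers class of CustomRules.js (Rules > Customize Rules).
// The BindUIButton attribute adds a button to Fiddler's toolbar.
public static BindUIButton("SingleBrowser")
function DoSingleBrowserMode() {
    // Stop acting as the system proxy so other apps' traffic isn't captured.
    FiddlerApplication.oProxy.Detach();

    // Launch one browser instance whose proxy points at Fiddler's endpoint.
    System.Diagnostics.Process.Start("msedge.exe",
        "--proxy-server=127.0.0.1:" + CONFIG.ListenPort +
        " --user-data-dir=C:\\Temp\\FiddlerProfile");
}
```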
This button is probably the single most valuable change I’ve made to my copy of Fiddler in years, and I’m honestly a bit sick that I never thought to include it decades ago.
Disable ZSTD
Zstandard is a very fast lossless compression algorithm that has seen increasing adoption over the last few years, joining deflate/gzip and brotli. Unfortunately, Telerik has not added support for Zstd compression to Fiddler Classic. While it would be possible to plumb support in via an extension, the simpler approach is to simply change outbound requests so that they don’t ask for this format from web servers.
Doing so is simple: just rewrite the Accept-Encoding request header:
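The embedded snippet amounts to a one-liner in CustomRules.js; a sketch (the exact replacement value is your choice):

```js
// In OnBeforeRequest(oSession: Session) in CustomRules.js:
// Drop zstd from the advertised encodings so servers reply with
// formats that Fiddler Classic knows how to decode.
if (oSession.oRequest.headers.Exists("Accept-Encoding")) {
    oSession.oRequest["Accept-Encoding"] = "gzip, deflate, br";
}
```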
Since moving to the Microsoft Defender team, I spend a lot more time looking at malicious files. You can integrate VirusTotal into Fiddler to learn more about any of the binaries it captures.
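The embedded script isn’t shown here, but the core of such an integration is a context-menu handler that hashes the selected response body and opens VirusTotal’s lookup page; a sketch (the menu caption and URL format are assumptions):

```js
// In the Handlers class of CustomRules.js.
// The ContextAction attribute adds an item to the Web Sessions
// list's context menu.
public static ContextAction("Look up body hash on VirusTotal")
function DoVTLookup(arrSessions: Session[]) {
    for (var i = 0; i < arrSessions.Length; i++) {
        arrSessions[i].utilDecodeResponse();  // Remove any compression first
        var oSHA256 = System.Security.Cryptography.SHA256.Create();
        var arrHash = oSHA256.ComputeHash(arrSessions[i].responseBodyBytes);
        var sHash = System.BitConverter.ToString(arrHash).Replace("-", "").ToLower();
        Utilities.LaunchHyperlink("https://www.virustotal.com/gui/file/" + sHash);
    }
}
```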
Beyond looking at hashes, I also spend far more time looking at malicious sites and binaries, many of which embed malicious content in base64 encoding. Fiddler’s TextWizard (Ctrl+E) offers a convenient way to transform Base64’d text back to the original bytes, and the Web Session List’s context menu’s “Copy > Response DataURI” allows you to easily base64 encode any data.
Add the NetLog Importer
If your goal isn’t to modify traffic with Fiddler, it’s often best not to have Fiddler capture browser traffic at all. Instead, direct your Chromium-based browser to log its traffic to a NetLog.json file which you can later import into Fiddler for analysis using the Fiddler NetLog Importer extension.
There are a zillion other useful little scripts you might add to Fiddler for your own needs. If you look through the last ten years of my GitHub Gists you might find some inspiration.
Adjust Settings
Configure modern TLS settings
Inside Tools > Fiddler Options > HTTPS, enable Capture HTTPS CONNECTs and Decrypt HTTPS traffic, and use the Protocols link to ensure that the list includes tls1.2.
Use Visual Studio Code as the Diff Tool
If you prefer VSCode to Windiff, type about:config in the QuickExec box below the Web Sessions list to open Fiddler’s Preferences editor.
Add or update the fiddler.config.path.differ entry to point to the path of your VSCode executable.
Set the fiddler.differ.params value to --diff "{0}" "{1}"
Miscellaneous
On the road and don’t have access to Fiddler? You can quickly explore a Fiddler SAZ file using a trivial web-based tool.
Developers can use Fiddler’s frontend as the UI for their own bespoke tools and processes. For example, I didn’t want to build a whole tampering UI for the Native Messaging Meddler, so I instead use Fiddler as the front-end.
The team recently got a false-negative report on the SmartScreen phishing filter complaining that we fail to block firstline-trucking.com. I passed it along to our graders but then took a closer look myself. I figured the legitimate site was probably at a very similar domain name, e.g. firstlinetrucking.com, but no such site exists.
Curious.
Simple Investigation Techniques
I popped open the Netcraft Extension and immediately noticed a few things. First, the site is new. Suspicious, since they claim to have been around since 2002. Next, the site is apparently hosted in the UK, although they brag about being “Strategically located at the U.S.-Canada border.” Sus... and just above that, they supply an address in Texas. Sus.
Let’s take a look at that address in Google Maps. Hmm. A non-descript warehouse with no signage. Sus.
Well, let’s see what else we have. Let’s go to the “About Us” page and see who claims to be employed here. Right-click the CEO’s picture and choose “Copy image link.”
Investigating the other employee photos and customer pictures from the “Customer testimonials” section reveals that most of them are also from stock photo sites. The unfortunately-named “Marry Hoe” has her picture on several other “About us” pages — it looks like she probably came with the template. Her profile page is all Lorem Ipsum placeholder text.
I was surprised that one of the biggest photos on the site didn’t show up in TinEye at all. Then I looked at the Developer Tools and noticed that the secret is revealed by the image’s filename — ai-generated-business-woman-portrait. Ah, that’ll do it.
I tried searching for the phone number atop the site ((956) 253-7799) but there were basically no hits on Google. This is both very sus and very surprising, because often Googling for a phone number will turn up many complaints about scams run from that number.
Moar Scams!
Hmm… what about all of those blog posts on the site? They’re not all lorem ipsum text. Hrm… but they do reference other companies. Maybe these scammers just lifted the text from some legit company? It seems plausible that “New England Auto Shipping” is a legit company they stole this from. Let’s copy this text and paste it into Google:
I didn’t find the source (likely neautoshipping.com, an earlier version of the scam from October 2024), but I did find another live copy of the attack, hosted on a similar domain:
This version is hosted at firstline-vehicle.com with the phone number (908-505-5378) and an address in New Jersey. They’ve literally been copy/pasting their scam around!
Netcraft reports that its first-seen date is next month 🙃. Good thing I’ve got my time machine up and running!
The page title of this scam site doesn’t match the scammers though. Hmm… What happens if I look for “Bergen Auto Logistics” then?
Another scam site, bergen-autotrans.com, this one registered this month and CEO’d by a Stock Photo woman:
There are some more interesting photos here, including some that are less obviously faked:
It looks like there was an earlier version of this site in November 2024 at bergenautotrans.com that is now offline:
Searching around, we see that there’s also currently a legit business in New York named “Bergen Auto” whose name and reputation these scammers may have been trying to coast off of. And now some of the pieces are starting to make more sense — Bergen New York is on the US/Canada border.
Searching for the string "Your car does not need be running in order to be shipped" turns up yet more copies of the scam, including britt-trucking.net with phone number (602) 399-7327:
Another random Stock Photo CEO is here, and our same General Manager now has a new name:
…and hey, look, it’s our old friends, now with a different logo on their shirts!
Interestingly, if you zoom in on the photo, you see that the name and logo don’t even match the scam site. The company logo and filename contain Sunni-Transportation, which was also found in the filename of Marry Hoe on the first site we looked at.
The same "Your car does not need be running in order to be shipped" string was also found on two now-offline sites, unitedauto-transport.com, and unitedautotrans.net.
Not a Phish, but definitely Fishy
I went back to our original complainant and asked for clarification — this site doesn’t seem to be pretending to be the site of any other company, but instead appears to be just entirely manufactured from AI and stock photos.
He explained that the attackers troll Craigslist[1] looking for folks buying used cars. They put up some fake listings, and then act as if the (fake) seller has chosen them as an escrow provider. After a bunch of paperwork, the victim buyer wires the attacker thousands of dollars for the nonexistent car. The attackers immediately send a fake tracking number that goes to an order tracking page that’s never updated. They’re abusing people who are risk-averse enough to seek out an escrow company to protect a big transaction, but who are not able to validate the bonafides of that “escrow company”… aka, smart humans. (Having bought houses thrice, I can say that validating the legitimacy of an escrow company is a very difficult task). Escrow scams like this one are only one of several popular attacks — this guide and this one describe several scams and how to avoid them.
Unfortunately, creating a fake business almost entirely in pixels is a simple scam, and one that’s not trivial to protect against. In cases where no existing business’ reputation is being abused, there’s no organization that’s particularly incentivized to do the work to get the bad guys taken down. Phishing protection features like SafeBrowsing and SmartScreen are not designed to protect against “business practices scams.”
The very same things that make online businesses so easy to start — low overhead, no real estate, and templates and AIs that can do the majority of the work — make it easy to invent fake businesses that only exist in the minds of their victims. After the scammers get found out, the sites disappear and the crooks behind them simply fade away.
Looking through here, most of the sites are dead, but not all. Some have been live for years!
[1] In college, a friend fell victim to a different scam on Craigslist, the overpayment scam. They’d rented a 3-bedroom apartment and needed a 3rd roommate. They were contacted by an “international student” who needed a room and sent my friends a check $500 larger than requested. “Oops, would you mind wiring back that extra? I really need it right now!” the scammer begged. My kind friends wired back the “overpayment” amount, and a few days later were heartbroken to discover that the original check had, of course, not actually cleared. They were out the $500, a huge sum for two broke young college students.
Recently, there’s been a surge in the popularity of trojan clipboard attacks whereby the attacker convinces the user to carry their attack payload across a security boundary and compromise the device.
Meanwhile, AI hype is all the rage. I recently had a bad experience with what I thought was a simple AI task (draw a map with pushpins in certain cities):
The generated map with wildly incorrect city locations
… but I was curious to see what AI would say if I pretended to be the target of a trojan clipboard attack. I was pleased to discover that the two AIs I tried both gave solid security advice for the situation:
ChatGPT and Gemini both understood the attack and the risk
A few days later, the term “vibe-coding” crossed my feed and I groaned a bit when I learned what it means… Just describe what you want to the AI and it’ll build your app for you. And yet. That’s kinda exactly how I make a living as a PM: I describe what I want an app to do, and wait for someone else (ideally, our dev team) to build it. I skimmed a few articles about vibe coding and then moved on with my day. I don’t have a lot of time to set up new workflows, install new devtools, subscribe to code-specific AI models, and so forth.
Back to the day job.
Talking to some security researchers looking into the current wave of trojan clipboard attacks, I brainstormed some possible mitigations. We could try to make input surfaces more clear about risk:
… but as I noted in my old blog post, we could be even smarter, detecting when the content of a paste came from a browser (akin to the “Mark of the Web” on downloads) and provide the user with a context specific warning.
In fact, I realized, we don’t even need to change any of the apps. Years ago, I updated SlickRun to flash anytime the system clipboard’s content changes as a simple user-experience improvement. A simple security tool could do the same thing– watch for clipboard changes, see if the content came from the browser, and then warn the user if it was dangerous.
In the old days, I’d’ve probably spent an evening or two building such an app, but life is busier now, and my C++ skills are super rusty.
But… what if I vibe-coded it? Hmm. Would it work, or would it fail as spectacularly as it did on my map task?
Vibe-coding ClipShield
I popped open Google Gemini (Flash 2.0) and directed it:
> Write me a trivial C++ app that calls AddClipboardFormatListener and on each WMClipboardUpdate call it scans the text on the clipboard for a string of my choice. If it's found, a MessageBox is shown and the clipboard text is cleared.
In about 15 seconds, it had emitted an entire C++ source file. I pasted it into Visual Studio and tried to compile it, expecting a huge pile of mistakes.
Sure enough, VS complained that there was no WinMain function. Gemini had named its function main(). I wonder if it could fix it itself?
> Please change the entry point from main to WinMain
The new code compiled and worked perfectly. Neat! I wonder how well it would do with making bigger changes to the code? Improvements occurred to me in rapid succession:
> To the WM_CLIPBOARDUPDATE code, please also check if the clipboard contains a format named "Chromium internal source URL".
> Update the code so instead of a single searchString we search for any of a set of strings.
> please make the string search case-insensitive
> When blocking, please also emit the clipboard string in the alert, and send it to the debug console via OutputDebugString
In each case, the resulting code was pretty much spot on, although I took the opportunity to tweak some blocks manually for improved performance. Importantly, however, I wasn’t wasting any time on the usual C++ annoyances, string manipulations and conversions, argument passing conventions, et cetera. I was just… vibing.
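At this point the overall shape of the tool, with the features above folded in, looked roughly like this (a simplified sketch from memory, not the actual generated code; the suspicious-string list and window class name are illustrative):

```cpp
#include <windows.h>
#include <shlwapi.h>
#include <string>
#include <vector>
#pragma comment(lib, "Shlwapi.lib")

// Illustrative list of strings whose presence on the clipboard is suspicious.
static const std::vector<std::wstring> g_suspicious = {
    L"powershell", L"mshta", L"curl" };

// Pull the Unicode text off the clipboard, if any.
static std::wstring GetClipboardText(HWND hwnd) {
    std::wstring text;
    if (OpenClipboard(hwnd)) {
        HANDLE hData = GetClipboardData(CF_UNICODETEXT);
        if (hData) {
            const wchar_t* p = static_cast<const wchar_t*>(GlobalLock(hData));
            if (p) {
                text = p;
                GlobalUnlock(hData);
            }
        }
        CloseClipboard();
    }
    return text;
}

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_CLIPBOARDUPDATE) {
        std::wstring text = GetClipboardText(hwnd);
        // Case-insensitive scan for any suspicious substring.
        for (const auto& needle : g_suspicious) {
            if (StrStrIW(text.c_str(), needle.c_str())) {
                OutputDebugStringW(text.c_str());
                MessageBoxW(hwnd, L"Suspicious clipboard content blocked!",
                            L"ClipShield", MB_ICONWARNING);
                if (OpenClipboard(hwnd)) { EmptyClipboard(); CloseClipboard(); }
                break;
            }
        }
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int) {
    WNDCLASSW wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = hInst;
    wc.lpszClassName = L"ClipShieldWnd";
    RegisterClassW(&wc);
    // Message-only window: no UI, just receives clipboard notifications.
    HWND hwnd = CreateWindowW(wc.lpszClassName, L"", 0, 0, 0, 0, 0,
                              HWND_MESSAGE, nullptr, hInst, nullptr);
    AddClipboardFormatListener(hwnd);
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```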
There was a compiler warning from Visual Studio in the log. I wonder if it could fix that? I just pasted the error in with no further instruction:
> Inconsistent annotation for 'WinMain': this instance has no annotations. See c:\program files (x86)\windows kits\10\include\10.0.26100.0\um\winbase.h(1060).
Gemini explained what the warning meant and exactly how to fix it. Hmm… What else?
> Is there a way to show the message box on a different thread so it does not block further progress?
Gemini refactored the code to show the alert in a different thread. Wait, is that even legal?
> In Windows API, is it legal to call MessageBox on another thread?
Gemini explained the principles around the UI thread and why showing a simple MessageBox was okay.
> Can you use a mutex to ensure single-instance behavior?
Done. I had to shift the code around a bit (I didn’t want errors to be fatal), but it was trivial.
Hmm…. What else. Ooh… What if I actually got real antivirus into the mix? I could call AMSI with the contents of the clipboard to let Defender or the system antivirus scan the content and give a verdict on whether it’s dangerous.
> Can you add code to call AMSI with the text from the clipboard?
It generated the code instantly. Amazing. Oops, it’s not quite right.
> clipboardText.c_str() is a char* but the AmsiScanString function needs an LPCWSTR
Gemini apologized for the error and fixed it. Hmm. Linking failed. This has always been a hassle. I wonder how Gemini will do?
> How do I fix the problem that the link step says "unresolved external symbol AmsiOpenSession"?
Gemini explained the cause of the problem and exactly how to fix it, including every click I needed to perform in Visual Studio. Awesome!
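The AMSI call sequence we landed on looks roughly like this (a sketch with error handling trimmed; the app and content names are illustrative):

```cpp
#include <windows.h>
#include <amsi.h>
#include <string>
#pragma comment(lib, "amsi.lib")

// Ask the installed antimalware engine (e.g. Defender) for a verdict on
// the clipboard text. Returns true if the engine flags it as malware.
bool IsTextMalicious(const std::wstring& clipboardText) {
    HAMSICONTEXT ctx = nullptr;
    HAMSISESSION session = nullptr;
    bool malicious = false;
    if (SUCCEEDED(AmsiInitialize(L"ClipShield", &ctx))) {
        if (SUCCEEDED(AmsiOpenSession(ctx, &session))) {
            AMSI_RESULT result = AMSI_RESULT_CLEAN;
            // AmsiScanString expects UTF-16 text (LPCWSTR).
            if (SUCCEEDED(AmsiScanString(ctx, clipboardText.c_str(),
                                         L"clipboard", session, &result))) {
                malicious = AmsiResultIsMalware(result);
            }
            AmsiCloseSession(ctx, session);
        }
        AmsiUninitialize(ctx);
    }
    return malicious;
}
```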
By now, I was just having tons of fun, pair programming a combination of my knowledge with Gemini’s strengths.
> Please hoist a time_point named lastClipboardUpdate to a global variable and update it each time the clipboard contents change.
> Please rewrite GetTimestamp not to use auto
I like to know what my types actually are.
> Please monitor keystrokes for the Win+R hotkey and if pressed and it's within 30 seconds of the clipboard copy, show a warning.
I see that it's using WM_HOTKEY.
> The RegisterHotKey call will not work because Windows uses that hotkey. Instead use a keyboard hook.
Gemini understands and writes the new code. It's a little kludgy, watching for the keydown and up events and setting booleans.
> Rather than watching for the VK_LWIN use GetAsyncKeyState to check if it's down.
Gemini fixes the code.
I’m super-impressed. Would the AI do as good a job for anyone who didn’t already deeply understand the space? Maybe not, and probably not as quickly. But it was nice that I had the chance to feel useful.
On Windows systems, that source of network threat information is commonly called SmartScreen, and support for querying it is integrated directly into the Microsoft Edge browser. Direct integration of SmartScreen into Edge means that the security software can see the full target URL and avoid the loss of fidelity incurred by HTTPS encryption and other browser network-privacy changes.
SmartScreen’s integration with Microsoft Edge is designed to evaluate reputation for top-level and subframe navigation URLs only, and does not inspect sub-resource URLs triggered within a webpage. SmartScreen’s threat intelligence data targets socially-engineered phishing, malware, and techscam sites, and blocking frames is sufficient for this task. Limiting reputation checks to web frames and downloads improves performance.
When an enterprise deploys Microsoft Defender for Endpoint (MDE), they unlock the ability to extend network protections to all processes using a WFP sensor/throttle that watches for connection establishment and then checks the reputation of the IP and hostname of the target site.
For performance reasons (Network Protection applies to connections much more broadly than just browser-based navigations), Network Protection first checks the target information with a frequently-updated bloom filter on the client. Only if there’s a hit against the filter is the online reputation service checked.
In both the Edge SmartScreen case and the Network Protection case, if the online reputation service indicates that the target site is disreputable (phishing, malware, techscam, attacker command-and-control) or unwanted (custom indicators, MDCA, Web Category Filtering), the connection will be blocked.
Debugging Edge SmartScreen
Within Edge, the Security Diagnostics page (edge://security-diagnostics/) offers a bit of information about the current SmartScreen configuration. Reputation checks are sent directly through Edge’s own network stack (just like web traffic) which means you can easily observe the requests simply by starting Fiddler (or you can capture NetLogs).
The service URL will depend upon whether the device is a consumer device or MDE-onboarded. Onboarded devices will target a geography-specific hostname; in my case, unitedstates.smartscreen.microsoft.com:
The JSON-formatted communication is quite readable. A request payload describes the in-progress navigation, and the response payload from the service supplies the verdict of the reputation check:
For a device that is onboarded to MDE, the request’s identity\device\enterprise node contains an organizationId and senseId. These identifiers allow the service to go beyond SmartScreen web protection and also block or allow sites based on security admin-configured Custom Indicators, Web Category Filtering, MDCA blocking, etc. The identifiers can be found locally in the Windows registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Advanced Threat Protection.
In the request, the forceServiceDetermination flag indicates whether the client was forced to send a reputation check because the WCF feature is enabled for the device. When Web Category Filtering is enabled, requests must hit the web service even if the target sites are “safe” (e.g. Facebook) because a WCF policy may demand it (e.g. “Block social media”).
If the target site has a negative reputation, the response’s responseCategory value indicates why the site should be blocked.
Verdict for a phishing site
Verdict for a site that delivers malware
The actions\cache node of the response allows the service to instruct the client component to cache a result to bypass subsequent requests. Blocks from SmartScreen in Edge are never cached, while blocks from the Network Protection filter are cached for a short period (to avoid hammering the web service in the event that an arbitrary client app has a retry-forever behavior or the like). To clear SmartScreen’s results cache, use the browser’s Delete Browsing Data feature (Ctrl+Shift+Delete); deleting your history will instruct the SmartScreen client to also discard its cache.
Debugging Network Protection
In contrast to the simplicity of capturing Edge reputation checks, the component that performs reputation checks runs in a Windows service account and thus it will not automatically send traffic to Fiddler. To get it to send its traffic to Fiddler, set the WinHTTP Proxy setting from an Admin/Elevated command prompt:
netsh winhttp set proxy 127.0.0.1:8888 "<-loopback>"
(Don’t forget to undo this later using netsh winhttp reset proxy, or various things will fall down after you stop running Fiddler!)
Unlike Network Protection generally, Web Content Filtering applies only to browser processes, so traffic from processes like Fiddler.exe is not blocked. Thus, to debug issues with WCF while watching the traffic from the nissvc.exe process, you can start a browser instance that ignores the system proxy setting like so:
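The command itself was shown in a screenshot; a typical invocation uses standard Chromium switches (the profile path is illustrative):

```bat
:: Launch Edge ignoring all proxy settings, so its traffic bypasses Fiddler
:: while Fiddler continues to capture the service's WinHTTP traffic.
msedge.exe --no-proxy-server --user-data-dir=%TEMP%\NoProxyProfile
```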
When a page is blocked by Web Category Filtering, you’ll see the following page:
If you examine the response to the webservice call, you see that it’s $type=block with a responseCategory=CustomPolicy:
Block response from unitedstates.smartscreen.microsoft.com/api/browser/edge/navigate/3
Unfortunately, there’s no indication in the response about what category the blocked site belonged to, although you could potentially look it up in the Security portal or get a hint from a 3rd party classification site.
In contrast, when a page is blocked due to a Custom Indicator, the blocking page is subtly different:
If you examine the response to the webservice call, you see that it’s $type=block with a responseCategory=CustomBlockList:
As you can see in the response, there’s an iocId field that indicates whether the block was targeting a DomainName or ipAddress, and specifically which one was matched.
Understanding Edge vs. Other Clients
On Windows, Network Protection’s integration into Windows Filtering Platform gives it the ability to monitor traffic for all browsers on the system. For browsers like Chrome and Firefox, that means it checks all network connections used by a browser both for navigation/downloads and to retrieve in-page resources (scripts, images, videos, etc).
Importantly, however, on Windows, Network Protection’s WFP filter ignores traffic from machine-wide Microsoft Edge browser installations (e.g. all channels except Edge Canary). In Edge, URL blocks are instead implemented using an Edge browser navigation throttle that calls into the SmartScreen web service. That service returns block verdicts for web threats (phishing, malware, techscams), as well as organizational blocks (WCF, Custom Indicators, MDCA) if configured by the enterprise. Today, Edge’s SmartScreen integration performs reputation checks against navigation (top-level and subframe) URLs only, and does not check the URLs of subresources.
In contrast, on Mac, Network Protection filtering applies to Edge processes as well: blocks caused by SmartScreen threat intelligence are shown in Edge via a blocking page while blocks from Custom Indicators, Web Category Filtering, MDCA blocking, etc manifest as toast notifications.
Block Experience
TLS encryption used in HTTPS prevents the Network Protection client from injecting a meaningful blocking page into Chrome, Firefox, and other browsers. However, even for unencrypted HTTP, the filter just injects a synthetic HTTP/403 response code with no indication that Defender blocked the resource.
Instead, blocks from Network Protection are shown as Windows “Toast” Notifications:
In contrast, SmartScreen’s direct integration into Edge allows for a meaningful error page:
Troubleshooting Network Protection “Misses”
Because Network Protection relies upon network-level observation of traffic, and browsers are increasingly trying to prevent network-level observation of traffic destinations, the most common complaints of “Network Protection is not working” relate to these privacy features. Ensure that browser policies are set to disable QUIC and Encrypted Client Hello.
Ensure that your scenario actually entails a network request being made: URL changes handled by ServiceWorkers and via pushState don’t require hitting the network.
If your scenario is blocked in Edge but some sites are not blocked in Chrome or Firefox, look at a NetLog to determine whether H/2 Connection Coalescing is in use, or disable Firefox’s network.http.http2.coalesce-hostnames using about:config.
Ensure that Defender Network Protection is enabled and you do not have exclusions that apply to the target process.
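On Windows, the QUIC and Encrypted Client Hello policies mentioned above can be pushed via the registry; a sketch for Edge (Chrome’s equivalents live under HKLM\SOFTWARE\Policies\Google\Chrome):

```bat
:: Disable QUIC so connections fall back to observable TCP/TLS
reg add "HKLM\SOFTWARE\Policies\Microsoft\Edge" /v QuicAllowed /t REG_DWORD /d 0 /f
:: Disable Encrypted Client Hello so the TLS handshake's target stays visible
reg add "HKLM\SOFTWARE\Policies\Microsoft\Edge" /v EncryptedClientHelloEnabled /t REG_DWORD /d 0 /f
```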
Test Pages
An older test page for SmartScreen scenarios can be found here. Note that some of its tests are based on deprecated scenarios and no longer do anything.
After last year’s disappointing showing at the Capitol 10K, I wanted to do better this time around.
We left the house at 6:47; traffic was light and we pulled into my regular parking spot at 7:09. It was a very chilly morning at 42F with a bracing breeze, so I wore my running tights, making sure to Body Glide everywhere to avoid a repeat of the miserable Austin Half chafing. I headed over to the start line and had a productive stop at the porta-potties on the way. The B corral was completely packed by the time I arrived so I had to wait outside of the queue until it drained up to the start line. My Coros watch successfully streamed music to one earbud for the whole race.
Compared to last year, I started out slower: this year, my pace to the 2 mile split was 9:18 while it was 8:58 last year. But this time, I kept running throughout and finished the first 5K 1:26 faster, and finished the overall race 6:33 faster; still 7:17 shy of my fastest, but under my goal.
I probably should’ve been running a bit faster throughout, but by far the most important factor was that I only dropped to a walk a few times, and usually for only 30 seconds or so. This year, I didn’t recognize the start of the “KQ Hill” (usually there’s an obvious counting cable you run over) so I didn’t run as hard as I might have otherwise. But I ran the whole hill, and the following hills as well.
Over the years, I’ve gotten in the bad habit of dropping to a walk when things seem hard (“Oh, I’ll walk until the next street light“) but I battled that in this race in two ways — by delaying myself by setting the start-target in the distance (“I’ll start walking when I pass the next street light“) and by avoiding excuses by keeping my heart rate under control for the whole race:
Unlike most past races, my pace was more consistent throughout:
All in all, it wasn’t my best performance, but I had fun with it. After the race, I wandered around the post-race expo (which I had entirely overlooked last year, oops) and tried a few non-alcoholic beers– they’d’ve been much more refreshing if it wasn’t in the low 40s and windy.
I’m excited to try to get an even better result in the Sunshine 10K in just 27 more days.
September 2025 tl;dr: You probably should not touch Exploit Protection settings. This post explains what the feature does and how it works, but admins and end-users should probably just leave it alone to do what it does by default.
Over the last several decades, the Windows team has added a stream of additional security mitigation features to the platform to help application developers harden their applications against exploitation. I commonly referred to these mitigations as the Alphabet Soup mitigations because each was often named by an acronym: DEP/NX, ASLR, SEHOP, CFG, etc. The vast majority of these mitigations were designed to help shield applications with memory-safety vulnerabilities, helping prevent an attacker from turning a crash into reliable malicious code execution.
Most of these mitigations were off-by-default for application compatibility reasons: Windows has always worked very hard to ensure that each new version is compatible with the broad universe of software, and enabling a security mitigation by default could unexpectedly break some application and prevent users from having a good experience in a new version of Windows.
There were some exceptions; for instance, some mitigations were enabled by default for 64-bit applications because the very existence of a 64-bit app during the mid-2000s was an existence proof that the application was being maintained.
In one case, Windows offered the user an explicit switch to turn on a mitigation (DEP/NX) for all processes, regardless of whether they opted-in:
But, generally, application developers were required to opt-in to new mitigations by setting compiler/linker flags, registry keys, or by calling the SetProcessMitigationPolicy API. One key task for product security engineers in each product cycle was to research the new mitigations available in Windows and opt the new version of their product (e.g. IE, Outlook, Word, etc) into the newest mitigations.
The requirement that developers themselves opt-in was frustrating to some security architects though– what if there was some older app that was no longer maintained but that could be protected by one of these new mitigations?
In response, EMET (Enhanced Mitigation Experience Toolkit) was born. This standalone application provided a user-friendly experience for enabling mitigations for an app; under the covers, it twiddled the bits in the registry for the process name.
EMET was useful, but it exposed the tradeoff to security architects: They could opt a process into new mitigations, but ran the risk of causing the app to break entirely, or only in certain scenarios. They would have to extensively test each application and mitigation to ensure compatibility across the scenarios they cared about.
EMET 5.52 went out of support way back in 2018, but has since been replaced by the Exploit Protection node in the Windows Security App. Exploit Protection offers a very similar user experience to EMET, allowing the user to specify protections on a per-app basis as well as across all apps.
If you dig into the settings, you can see the available options:
You can also see the settings on a “per-program” basis:
…including the settings put into the registry by application installers and the like.
IFEO Registry screenshot showing the “Mandatory ASLR” bit set for msfeedssync.exe
While Exploit Protection is built into Windows, it also works with Microsoft Defender for Endpoint (MDE), enabling security admins to easily deploy rules across their entire tenant. Some rules offer an "Audit mode", which allows a security admin to check whether a given rule is likely to be compatible with their real-world deployment before deploying it in enforcement mode.
Beyond the Windows UI and MDE, mitigations can also be deployed via a PowerShell module; often, you’ll use the Export link on a machine that’s configured the way you like and then import that XML to your other desktops.
Notably, the Set-ProcessMitigation command should be run as an admin (since it needs to touch systemwide registry keys, and silently ignores Access Denied errors). If you choose to import an XML configuration file, the importer’s parser is extremely liberal (ignoring, for instance, whether the document is well-formed) and simply walks the document looking for AppConfig nodes that specify configuration settings per app.
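To illustrate just how liberal such a parser can be, here's a Python sketch that scans for AppConfig nodes by pattern-matching rather than by true XML parsing. The Executable attribute and the sample document below are assumptions for illustration, not the actual import schema:

```python
import re

def find_app_configs(xml_text: str) -> list[str]:
    """Illustrative sketch: locate AppConfig nodes by pattern-matching,
    without requiring the document to be well-formed XML."""
    # Grab the (hypothetical) Executable attribute from each AppConfig element.
    return re.findall(r'<AppConfig\s+[^>]*Executable="([^"]+)"', xml_text)

# Note the stray, unclosed <MitigationPolicy tag in the middle: a strict
# XML parser would reject this document, but a liberal scanner doesn't care.
sample = '''
<MitigationPolicy>
  <AppConfig Executable="notepad.exe"><ASLR BottomUp="true"/></AppConfig>
  <MitigationPolicy
  <AppConfig Executable="calc.exe"><DEP Enable="true"/></AppConfig>
'''

print(find_app_configs(sample))  # ['notepad.exe', 'calc.exe']
```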
The Big Challenge
The big challenge with Exploit Protection (and EMET before it) is that, if these mitigations were safe to apply by default, we would have done so. Any of these mitigations could conceivably break an application in a spectacular (or nearly invisible) way.
Exploit Mitigations like “Bottom Up ASLR” are opt-in because they can cause compatibility issues with applications that make assumptions about memory layout. Opting an application into a mitigation can cause the application to crash at startup, or later, at runtime, when the application’s (now incorrect) assumption causes a memory access error. Crashes could occur every time, or randomly.
When a mitigation is hit, you might see an explicit "block" event in the Event Log or the Defender Portal, or you might not. That's because in some cases a mitigation doesn't simply block the offending operation; instead, Windows terminates the process. You might check whether Watson captured a crash of the application as it started, but typically debugging these sorts of failures entails starting the target application under a debugger and stepping through its execution until the failure occurs. That is rarely practical for anyone other than the application's developers (who have its private symbols and source code). If excluding an application from a mitigation doesn't help, it may be that the executable launches some other executable that also needs an exclusion; collecting a Process Monitor log can reveal whether that's the case.
Other Problems…
Beyond the problem that turning on additional mitigations could break your applications in surprising ways, there's another wrinkle: mitigations are settable by both admins and developers, but there's no good way to "reset" your settings if you make a mistake or change your mind. Various PowerShell scripts are available to wipe all of the EP settings from the registry, but doing so wipes out not only the EP settings you set, but also any IFEO (Image File Execution Options) registry settings set by an application's own developers, leaving you less secure than when you started.
Developer Best Practices
In the ideal case, developers will themselves opt-in (and verify) all available security mitigations for their apps, ensuring that they do not effectively “offload” the configuration and verification process to their customers.
With the increasing focus on security across the software ecosystem, we see that best practice followed by most major application developers, particularly in the places where it’s most useful (browsers and internet clients). Browser developers in particular tend to go far beyond the “alphabet soup” mitigations and also design their products with careful sandboxing such that, even if remote code execution is achieved, it is confined to a tight sandbox to protect the rest of the system.
Last November, I wrote a post about the basics of security software. In that post, I laid out how security software is composed of sensors and throttles controlled by threat intelligence. In today’s post, we’ll look at the Windows Filtering Platform, a fundamental platform technology introduced in Windows Vista that provides the core sensor and throttle platform upon which several important security features in Windows are built.
What is the Windows Filtering Platform?
The Windows Filtering Platform (WFP) is a set of technologies that enable software to observe and optionally block messages. In most uses, WFP is used to block network messages, but it can also block local RPC messages.
For networking and RPC scenarios, performance is critical, so a consumer of WFP specifies a filter that describes the messages it is interested in, and the platform will only call the consumer when a message that matches the filter is encountered. Filtering ensures that processing is minimized for messages that are not of interest to the consumer.
When a message matching the filter is encountered, it is sent to the consumer which can examine the content of the message and either allow it to pass unmodified, or it can change the content or indicate that WFP should drop the message.
This sensor and throttle architecture empowers several critical security features in Windows.
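That filter-then-callout pattern can be sketched in a few lines of Python. This is a toy model of the concept only; the real WFP API is a C driver interface with layers, filter conditions, and callouts:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    remote_port: int
    payload: bytes

# Hypothetical verdicts, loosely mirroring WFP's permit/block decisions.
PERMIT, BLOCK = "permit", "block"

class FilterEngine:
    """Toy model of the WFP pattern: consumers register a filter predicate,
    and their callout runs only for messages matching that filter."""
    def __init__(self):
        self.consumers = []

    def register(self, matches: Callable[[Message], bool],
                 callout: Callable[[Message], str]):
        self.consumers.append((matches, callout))

    def dispatch(self, msg: Message) -> str:
        for matches, callout in self.consumers:
            if matches(msg):          # filtering: skip uninterested consumers
                if callout(msg) == BLOCK:
                    return BLOCK      # any consumer may drop the message
        return PERMIT                 # no consumer objected

engine = FilterEngine()
# This consumer's callout only ever sees traffic destined for port 80.
engine.register(lambda m: m.remote_port == 80,
                lambda m: BLOCK if b"malware" in m.payload else PERMIT)

print(engine.dispatch(Message(80, b"GET /malware")))   # block
print(engine.dispatch(Message(443, b"GET /malware")))  # permit: filter not matched
```

The key performance property is visible even in the toy: the callout is never invoked for the port-443 message, because the filter predicate rejects it first.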
Windows Firewall
Most prominently, WFP is the technology underneath the Windows Firewall (formally, "Windows Defender Firewall with Advanced Security", despite the feature having little to do with Defender). The Firewall controls whether processes may establish outbound connections or receive inbound traffic using a flexible set of rules. The Firewall settings UI (wf.msc) allows the user to specify these rules, which are then enforced using the Windows Filtering Platform.
…and when a process tries to bind a port to allow inbound traffic, a UI prompt is shown:
For the most part, however, the Windows Firewall operates silently and does not show much in the way of UI. In contrast, the Malwarebytes Windows Firewall Control app provides a much more feature-rich control panel for the Windows Firewall.
One major limitation of the Windows Firewall, which existed for almost two decades, is that it natively supports only rules that target IP addresses. Many websites periodically rotate between IPs (for operational reasons, load balancing, geographic CDNs, etc.), and many products are unwilling to commit to a fixed set of IP addresses forever. Thus, the inability to create firewall rules that use DNS names (e.g. "Always permit traffic to VirusTotal.com") was a significant limitation.
This limitation was later mitigated with a new feature called Dynamic Keywords. Dynamic keywords allow you to specify a target DNS name in a firewall rule. Windows Defender’s Network Protection feature will subsequently watch for any unencrypted DNS lookups of the specified DNS name. When Network Protection observes a resolution for a targeted DNS name, it will send a message to the firewall service directing it to update the rule with the IP address returned from DNS. (Note: This scheme is imperfect in several dimensions: for example, asynchrony means that the firewall rule may be updated milliseconds after the DNS resolution, such that the first attempt to connect to an allowed hostname could be blocked.)
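A toy simulation of that flow, with illustrative names rather than the actual Windows interfaces, might look like this:

```python
# Toy simulation of the Dynamic Keywords flow: Network Protection observes
# a DNS answer and pushes the resolved IPs into the firewall rule's
# address set. Class and method names here are made up for illustration.

class DynamicKeywordRule:
    def __init__(self, hostname: str):
        self.hostname = hostname
        self.allowed_ips: set[str] = set()   # starts empty!

    def on_dns_resolution(self, hostname: str, ips: list[str]):
        # Called when Network Protection sees an unencrypted DNS answer.
        if hostname.lower() == self.hostname.lower():
            self.allowed_ips.update(ips)

    def permits(self, ip: str) -> bool:
        return ip in self.allowed_ips

rule = DynamicKeywordRule("virustotal.com")

# A connection attempted before any DNS lookup is observed is still blocked...
print(rule.permits("74.125.34.46"))              # False

# ...but once a resolution is observed, the returned IP becomes allowed.
rule.on_dns_resolution("virustotal.com", ["74.125.34.46"])
print(rule.permits("74.125.34.46"))              # True
```

Note how the toy reproduces the race described above: until the resolution event is processed, even a legitimate connection to the allowed hostname's IP would be denied.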
Zero Trust DNS (ZTDNS)
More recently, the Windows team has been building a feature called “Zero Trust DNS” whereby a system can be configured to perform all DNS resolutions through a secure DNS resolver, and any connections to any network address that was not returned in a response from that resolver are blocked by WFP.
In this configuration, your organization’s DNS server becomes the network security control point: if your DNS server returns an address for a hostname, your clients can connect to that address, but if DNS refuses to resolve the hostname (returning no address), the network connection is blocked. (An app that is hardcoded to talk to a particular IP address would find its requests blocked, since no DNS request “unlocked” access to that address). The ZeroTrustDNS feature is obviously only suitable for certain operational environments due to its compatibility impact.
AppContainer
Windows 8 introduced a new application isolation technology called AppContainer that aimed to improve security by isolating “modern applications” from one another and the rest of the system. AppContainers are configured with a set of permissions, and the network permission set permits restricting an app to public or private network access.
James Forshaw from Google’s Project Zero wrote an amazing blog post about how AppContainer’s network isolation works. The post includes a huge amount of low-level detail about the implementation of both WFP and the Windows Firewall.
Restricted Network Access
Years before the advent of AppContainers, Windows included a similar feature to block network traffic from Windows Services. Like the AppContainer controls, it is implemented in WFP below/before the Windows Firewall rules, although it is enforced by the same Windows Firewall service.
Windows Service Hardening enables restrictions developers can set on a service's network access. Network access restrictions for Services:
- are evaluated before the Windows Firewall rules
- are enforced regardless of whether Windows Firewall is enabled
- are defined programmatically using the INetFwServiceRestriction and INetFwRule APIs.
Windows Vista and later include a set of predefined network access restrictions for built-in services. Service network access restrictions are stored in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\RestrictedServices\ registry key. The predefined restrictions are stored in the Static\System key, and custom restrictions are stored in the Configurable key.
A one-off protection built atop WFP is the Port Scanning Prevention filter, described here. The article also briefly mentions the use of Event Tracing for Windows (ETW) Logging in the WFP.
RPC Filtering
The Windows Filtering Platform is not limited to Network messages; it can also be used to filter RPC messages between processes.
I only learned about MDE's use of RPC Filtering recently. In 2024, Microsoft employees were briefly unable to play Netflix or other DRM'd video in the Microsoft Edge browser. The root cause turned out to be Defender's Disruption RPC filter blocking messages from the PlayReady DRM sandboxed process, causing the failure during playback.
Microsoft Defender Network Protection
Network Protection (NP) is composed of a WFP driver (wdNisDrv.sys) and a service (NisSrv.exe). It can synchronously block TCP traffic based on IP and URI reputation, relying on threat intelligence from the SmartScreen webservice and signatures authored by the Defender team. While SmartScreen is integrated directly into Microsoft Edge, NP can extend SmartScreen's anti-phishing/anti-malware protection to other clients like Chrome and Firefox.
Unfortunately, the use of TLS causes user-experience problems when blocking unwanted content. When Network Protection determines that it must block an HTTPS site, it cannot return an error page to the browser because TLS prevents the injection of content into the secure channel. So NP instead injects a "Handshake Failure" TLS fatal alert into the bytestream to the client and drops the connection. The client browser, unaware that the fatal alert was injected by Network Protection, concludes that the server does not properly support TLS, and shows an error page complaining of the same.
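The injected alert is a tiny, well-specified record. Here's a Python sketch of the seven bytes involved, assuming a TLS 1.2 record-layer version (the actual bytes NP emits may differ in the version field):

```python
# A TLS alert record: content type 21, a 2-byte version, a 2-byte length,
# then a 2-byte payload of (level, description). See RFC 5246 section 7.2.
CONTENT_TYPE_ALERT = 0x15   # 21 = alert record
LEVEL_FATAL = 0x02
HANDSHAKE_FAILURE = 0x28    # alert description 40 = handshake_failure

alert = bytes([
    CONTENT_TYPE_ALERT,
    0x03, 0x03,             # record-layer version (TLS 1.2)
    0x00, 0x02,             # payload length: 2 bytes
    LEVEL_FATAL,
    HANDSHAKE_FAILURE,
])
print(alert.hex())  # 15030300020228
```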
If the system’s Notifications feature is enabled, a notification toast appears with the hope that you’ll see it and understand what’s happened.
Chromium-based browsers show ERR_SSL_VERSION_OR_CIPHER_MISMATCH when the TLS handshake is interrupted.
Mozilla Firefox shows SSL_ERROR_NO_CYPHER_OVERLAP when the TLS handshake is interrupted.
Beyond blocking unwanted connections, Network Protection provides network-related signals to the Defender engine, enabling behavior monitoring signatures to target suspicious network connections. By way of example, by generating a JA3/JA3S/JA4 signature of a HTTPS handshake’s fields and comparing it to known-suspicious scenarios, security software can detect a malicious process communicating with its Command and Control servers, even when the traffic is encrypted and the process is not yet known to be malicious.
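The fingerprinting idea can be sketched in a few lines of Python: concatenate the ClientHello's field values in a fixed order and hash them. This is a simplification of real JA3 (which also normalizes GREASE values and other details), and the field values below are made up for illustration:

```python
import hashlib

def ja3_style_fingerprint(version, ciphers, extensions, curves, point_formats):
    """Sketch of a JA3-style fingerprint: join the ClientHello's field
    values in a fixed order, then hash the result. The hash identifies
    the *shape* of the handshake, not the encrypted content."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Any client presenting this exact combination of TLS parameters produces
# the same fingerprint, even though its traffic payload is encrypted.
fp = ja3_style_fingerprint(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23], [0])
print(fp)  # a stable 32-hex-digit fingerprint for this handshake shape
```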
Microsoft Defender for Endpoint EDR
MDE's EDR feature also includes MsSecWfp, a driver that filters network connections based on rules. It is leveraged in network isolation scenarios, where a configuration for the machine is passed to the driver for enforcement. The driver produces audit events in the form of ETW events, and it does not perform deep traffic inspection.
When EPP is enabled, a WFP callout in WTD.sys watches for the Server Name Indication (SNI) extension in TLS ClientHello messages so that it can track which hosts a process has established network connections with. If a user subsequently types their domain password, the Web Threat Defense service checks the reputation of those connections to determine whether the process is connected to a known phishing site.
And More…
Importantly, WFP isn’t used just by Microsoft — many third party security products are also implemented using the Windows Filtering Platform. Security applications that integrate with the Windows Security Center are required (by policy) to be built upon WFP.
While WFP is a core platform technology used by many products, it’s not the only one available. For example, beyond its use of WFP in Network Protection, Microsoft Defender for Endpoint also includes a Network Detection and Response (NDR) feature.
NDR is composed of a service which listens to ETW notifications generated by Pktmon and WinSock, performing async packet analysis using Zeek, an open-source network monitoring platform. Network data is captured at a layer below/before WFP, using PktMon. Compared to WFP, PktMon enables capture of lower-level network data, useful for watching for attempts to exploit bugs in Windows’ network stack itself (e.g. 2021, 2024).
Finally, some network security solutions are based on network traffic routing (e.g. proxies), optionally with decryption via MitM. Microsoft Entra’s Global Secure Access is a newer offering in this category, but there are many vendors offering solutions in this category.
Limitations & Future Directions
Perhaps the most important limitation of network-level monitoring is that the increasingly ubiquitous use of encryption (HTTPS/TLS) means that request and response payloads are usually indecipherable at the network level.
In the "good old days" of network security, a security product could observe all DNS resolution and examine the full content of requests and responses. Nowadays, DNS traffic is increasingly encrypted, and HTTPS prevents network-level monitoring from observing URLs, request headers and bodies, and response headers and bodies. For years, security software could still observe the target hostname by sniffing the SNI out of ClientHello messages, but the ongoing deployment of Encrypted Client Hello means that even this signal is disappearing:
Beyond ECH, a growing fraction of traffic takes place over HTTP3/QUIC which doesn’t use TCP at all. QUIC’s use of “initial encryption” precludes trivial determination of the server’s name.
Some security solutions attempt to inject their code into clients or otherwise obtain HTTPS decryption keys, but these approaches tend to be unreliable or offer poor performance.
It seems likely that in the future, the security industry will need to work with application developers to ensure that their software integrates with network security checks directly (as they already do elsewhere) instead of trying to sniff out the destination of traffic at the network level.
Telerik developers recently changed Fiddler to validate the signature on extension assemblies before they load. If the assembly is unsigned, the user is presented with the following message:
However, it’s important to understand the threat model and tradeoffs here.
Validating signatures every time a file is loaded takes time and slows the startup of the app. That’s particularly true if online certificate revocation checking is performed. The performance impact is one reason why most of my applications have a manifest that indicates that .NET shouldn’t bother:
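For reference, the switch in question is the .NET runtime's publisher-evidence setting. A sketch of what such an app.config entry looks like (consult the .NET Framework documentation for your target version; I'm not claiming this is Fiddler's exact file):

```xml
<configuration>
  <runtime>
    <!-- Skip Authenticode publisher-evidence generation at assembly load,
         avoiding slow certificate-revocation checks at startup. -->
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>
```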
Signing your installers is critical to help protect them at rest on your servers and streamline the security checks that happen on download and install. Signing the binaries within can be useful in case the user has Smart App Control enabled, and other security software (e.g. Firewall rules that target publisher signatures) may benefit as well.
However, having your app check the signature itself is less useful than you might expect for most applications. The problem is that there’s usually no trust boundary that would preclude an attacker from, for instance, tampering with your app’s code to remove the signature check. In most cases, the attacker could simply modify fiddler.exe to remove the new signature checking code, such that the protection is removed. Similarly, they could likely execute a .DLL hijacking attack to get their code loaded without any signature check at all. Or they could use their own process to inject code into the victim’s address space at runtime. It’s a long list.
In Telerik’s case, tampering to evade signature checking is even simpler. If the user elects to “Always allow” an unsigned extension, that decision is stored as a base64 encoded string in a simple Fiddler preference:
You can use Fiddler’s TextWizard to decode the preference value from about:config
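Outside of Fiddler, decoding such a preference is a one-liner in most languages; here's a Python sketch using a hypothetical value (the real preference encodes the allowed extension's identity, but any base64 string decodes the same way):

```python
import base64

# Hypothetical preference value, standing in for the real stored string.
pref_value = base64.b64encode(b"MyExtension.dll").decode("ascii")

decoded = base64.b64decode(pref_value)
print(decoded)  # b'MyExtension.dll'
```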
An attacker with sufficient permission to write a .DLL to a place that Fiddler will load it would also have sufficient permission to edit the registry keys where this preference is stored.
Finally, Fiddler's signature check doesn't tell the user who signed the file, such that all signatures that chain to any trusted CA are silently allowed. Now, this isn't entirely worthless: CAs cannot prevent a certificate from being used to sign malware, but in theory a certificate found to have done so will eventually be revoked.
If you plan to check code signatures in your application, carefully consider the threat model and ensure that you understand the limits to the protection. And remember that sometimes, “code” may be stored in a type of file that does not natively support signing, as in the case of Fiddler’s script or certain Chromium files.
Spring break is one of the best times to be in Texas. The weather's usually nice, and outdoor activities aren't yet miserably hot. This year, the kids are obsessed with roller coasters, so we bought Season Passes to Six Flags (which also include a variety of other theme parks and water parks). Thus far, we've spent two days at Six Flags in San Antonio and two days at Six Flags in Dallas.
The excellent “Dr. Diabolical” drop at Six Flags. The kids are in the back row. We all rode it a dozen times.
The kids spent the actual days of spring break on an adventure trip with their mom to Costa Rica:
While they were out of town, I took a quick four day cruise out of Galveston on the Mariner of the Seas.
To keep costs down, this time I took a Deck 7 “interior view” cabin that overlooked the Promenade:
… but I didn’t spend much time in the room. I spent a lot of time at shows, walking the top deck, enjoying music (“Ed”, a Brazilian singer/guitarist) in the pub, and generally relaxing. I passed some time reading Ken Williams’ history of Sierra On-line, a pleasant and nostalgic read that made little impression on me. The weather was imperfect (very windy with intermittent drizzles) but it was quite nice overall.
Most importantly, I achieved my secret goal for the trip, making some crucial progress in writing my long-overdue book which I’ve resolved to publish later this year.
The comedian (Rodney Johnson) was selling a copy of his book, and he autographed my copy with an inscription (coincidentally) perfect for my goals on the cruise:
Apparently, I was the only person to buy the book on the first day, and at the show on the last day, he asked if I was in the audience (“I am!”), had read it (“I did! Cover to cover!”) and what I thought of it (On the spot, I said “Pretty good”, and emailed him a funnier response later).
I often sit in the front row at shows, and I was called on stage to help Michael Holly with a magic trick (We held a chain and he walked through it.).
When my original shore excursion (an adventure park) was canceled, I made the best of it with a short speedboat-and-snorkeling trip.
30 horsepower isn't a lot, but it was plenty to jump the waves.
After returning the boat, I killed an hour at a pretty swim-up bar.
With just one day at a destination and two sea days, I also booked a “Behind the Scenes” tour of the boat, getting the chance to see a galley, the provisions storerooms, the laundry, the bridge, the engine control room, and backstage in the theater.
The Engine Control Room
The crew work incredibly long hours: 10 hours a day, 7 days a week, on 7-month contracts. I resolved to think of this guy any time I'm feeling overwhelmed with work: he's working in a windowless (underwater) room, hand-folding thousands of towels per day from a room-sized pile almost as tall as he is.
All in all, it was a busy but great spring break. Now, buckling down to get back in shape (two 10Ks coming up), finish booking various trips (including Kilimanjaro!), and otherwise get back to some semblance of a routine.
A customer recently complained that after changing the Windows Security Zone configuration to Disable launching apps and unsafe files:
The default is “Prompt”
… trying to right-click and “Save As” on a Text file loaded in Chrome fails in a weird way. Specifically, Chrome’s download manager claims it saved the file (with an incorrect “size” that’s actually the count of files “saved”):
However, if you click on the entry, Chrome notices that the file doesn’t exist:
If you've configured the setting Launching applications and unsafe files to Disable in your Internet Control Panel's Security tab, Chromium will block executable file downloads with a note: Couldn't download - Blocked.
…but this case here is somewhat different than that.
The customer claims that it is a regression, so let’s bisect.
Bisecting, we find that it is indeed a behavior change. Chrome 130 didn’t have this problem. The bisect process tells us:
You are probably looking for a change made after 1366085 (known good), but no later than 1366127 (first known bad)
CHANGELOG URL:
https://chromium.googlesource.com/chromium/src/+log/8c10e43000483bdc4a1b5bf092b39266597d3fc8..ac84b1ec75f49a771ad490760cdaf8872aae8a29
42 changelists is a pretty wide range, so let’s try the Win64 version to see if we can narrow either side:
You are probably looking for a change made after 1366107 (known good), but no later than 1366146 (first known bad).
CHANGELOG URL:
https://chromium.googlesource.com/chromium/src/+log/f3063c6a843ccf316d31f9169972b1ae546945b2..5f8917fb6721c1fd070cfccf45fa2ba44a8d0253
Okay, so if we take the tightest constraints, we end up with a narrower range of 1366107 to 1366127, which is only 20 CLs.
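Combining the two bisect results is a simple interval intersection:

```python
# The culprit CL must lie within both bisect ranges, so intersect them.
range_a = (1366085, 1366127)   # first bisect:  (last good, first bad)
range_b = (1366107, 1366146)   # Win64 bisect:  (last good, first bad)

last_good = max(range_a[0], range_b[0])
first_bad = min(range_a[1], range_b[1])

print(last_good, first_bad)    # 1366107 1366127
print(first_bad - last_good)   # 20 candidate CLs
```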
Within that range, only one CL (1366109) appears to have anything to do with downloads. I quickly clicked through the others just to be sure.
Looking at the code of the 1366109 change, we don’t see anything directly related to the Save As scenario.
Neat. I like mysteries. If we follow Sherlock Holmes’ quote: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth” we conclude that the CL in question must be responsible. But how could it be?
Well, this looks mildly interesting:
The code was changed to allow a parallel process to delete the file where that would not have been allowed previously. But still, what would delete the temp file in the middle of the overall download process?
Spoiler Alert: The answer turns out to be the Windows code called to apply the Mark-of-the-Web by the Chromium quarantine code!
If we fire up SysInternals' Process Monitor against the current version of Chrome, we see that the downloaded file created in the temp folder (before being moved/renamed to its final name) is deleted by CAttachmentServices code when it is called by Chrome's QuarantineFile function:
In contrast, in the older build of Chrome, we see that the temporary file cannot be deleted because the code that tries to delete it hits a SHARING_VIOLATION, since Chrome’s handle to the file didn’t offer SHARE_DELETE:
So… neat. It was basically an accident that this file could be saved in older Chrome.
Now, with all that said, we’re left with even more questions. For example, you’ll see that if you try to download a dangerous file type via a normal download when the Zone settings are configured as above, the download is blocked:
… but if you try to download our text file via the regular file download flow, it’s allowed:
So how can it be that if you try to perform a Save As on that same text file, it’s somehow blocked? What’s up with that? Chrome’s quarantine code runs on both files!
The secret is how the Windows Attachment Manager code decides whether a file is dangerous during the CAttachmentServices::Save call. Windows’ attachment manager has to rely on the file’s extension to decide whether the type is dangerous. In the “Save As” case, the file’s extension is .tmp, whereas in the regular download manager case, the file has the correct and final extension (.txt).
However, if you look at the SaveFileManager code, the quarantine (MotW annotation) step happens inside OnURLLoaderComplete just after the file is downloaded but before that temporary file gets renamed to its final name (inside RenameAllFiles).
You can confirm that the .tmp filename extension is indeed causing the problem by using the registry to temporarily declare that .tmp is a low-risk file extension:
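One way to do that is via the Attachment Manager's low-risk inclusion list policy. This is a sketch (double-check the key path and value name for your Windows version, and remember to remove it after testing, since it weakens Attachment Manager's protections):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Associations]
"LowRiskFileTypes"=".tmp"
```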
After you make this change, the Save As scenario works correctly.
I’ve filed a bug on Chrome’s SaveAs code to ensure that the file has the correct filename before the quarantine logic runs.