Understanding SmartScreen and Network Protection

The vast majority of cyberthreats arrive via one of two related sources, both of which ultimately require the victim's device to contact an attacker-controlled site.

That means that by combining network-level sensors and throttles with threat intelligence about attacker sites, security software can block a huge percentage of threats.

Protection Implementation

On Windows systems, that source of network threat information is commonly called SmartScreen, and support for querying it is integrated directly into the Microsoft Edge browser. Direct integration of SmartScreen into Edge means that the security software can see the full target URL and avoid the loss of fidelity incurred by HTTPS encryption and other browser network-privacy changes.

SmartScreen’s integration with Microsoft Edge is designed to evaluate reputation for top-level and subframe navigation URLs only, and does not inspect sub-resource URLs triggered within a webpage. SmartScreen’s threat intelligence data targets socially-engineered phishing, malware, and techscam sites, and blocking frames is sufficient for this task. Limiting reputation checks to web frames and downloads improves performance.

When an enterprise deploys Microsoft Defender for Endpoint (MDE), it unlocks the ability to extend network protections to all processes using a WFP sensor/throttle that watches for connection establishment and then checks the reputation of the target site's IP address and hostname.

For performance reasons (Network Protection applies to connections much more broadly than just browser-based navigations), Network Protection first checks the target information with a frequently-updated bloom filter on the client. Only if there’s a hit against the filter is the online reputation service checked.
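As a rough sketch of this two-stage design (in Python, and purely illustrative; this is not Defender's actual implementation), the local filter can cheaply answer "definitely not on the list," and only a "possibly on the list" answer triggers a round-trip to the reputation service:

import hashlib

class BloomFilter:
    """A toy Bloom filter: false positives are possible, false negatives are not."""
    def __init__(self, size_bits=1 << 20, hash_count=4):
        self.size = size_bits
        self.hash_count = hash_count
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.hash_count):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

def check_target(hostname, local_filter, query_reputation_service):
    # Definite miss in the local filter: allow without any network traffic.
    if not local_filter.might_contain(hostname):
        return "allow"
    # Possible hit (Bloom filters can return false positives): confirm online.
    return query_reputation_service(hostname)

suspect_hosts = BloomFilter()
suspect_hosts.add("evil.example")
print(check_target("contoso.com", suspect_hosts, lambda host: "block"))   # allow, no service call
print(check_target("evil.example", suspect_hosts, lambda host: "block"))  # consults the service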

In both the Edge SmartScreen case and the Network Protection case, if the online reputation service indicates that the target site is disreputable (phishing, malware, techscam, attacker command-and-control) or unwanted (custom indicators, MDCA, Web Category Filtering), the connection will be blocked.

Debugging Edge SmartScreen

Within Edge, the Security Diagnostics page (edge://security-diagnostics/) offers a bit of information about the current SmartScreen configuration. Reputation checks are sent directly through Edge’s own network stack (just like web traffic) which means you can easily observe the requests simply by starting Fiddler (or you can capture NetLogs).

The service URL will depend upon whether the device is a consumer device or one that's onboarded to MDE. Onboarded devices will target a geography-specific hostname; in my case, unitedstates.smartscreen.microsoft.com:

The JSON-formatted communication is quite readable. A request payload describes the in-progress navigation, and the response payload from the service supplies the verdict of the reputation check:
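To give a rough sense of the shapes involved (illustrative only: these Python dictionaries show just the fields discussed in this post; the real payloads contain many more fields and the schema may differ between client versions):

# Illustrative only: field names drawn from this post, not the complete schema.
request = {
    "identity": {
        "device": {
            "enterprise": {                      # present on MDE-onboarded devices
                "organizationId": "<org GUID>",
                "senseId": "<device identifier>",
            }
        }
    },
    "forceServiceDetermination": False,          # True when Web Category Filtering applies
    # ...plus fields describing the in-progress navigation (target URL, etc.)
}

response = {
    "$type": "block",                            # the verdict
    "responseCategory": "CustomBlockList",       # or CustomPolicy, or a web-threat category
    "actions": {"cache": {}},                    # caching instructions for the client
}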

For a device that is onboarded to MDE, the request’s identity\device\enterprise node contains an organizationId and senseId. These identifiers allow the service to go beyond SmartScreen web protection and also block or allow sites based on security admin-configured Custom Indicators, Web Category Filtering, MDCA blocking, etc. The identifiers can be found locally in the Windows registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Advanced Threat Protection.

In the request, the forceServiceDetermination flag indicates whether the client was forced to send a reputation check because the WCF feature is enabled for the device. When Web Category Filtering is enabled, requests must hit the web service even if the target sites are “safe” (e.g. Facebook) because a WCF policy may demand it (e.g. “Block social media”).

If the target site has a negative reputation, the response’s responseCategory value indicates why the site should be blocked.

Verdict for a phishing site
Verdict for a site that delivers malware

The actions\cache node of the response allows the service to instruct the client component to cache a result to bypass subsequent requests. Blocks from SmartScreen in Edge are never cached, while blocks from the Network Protection filter are cached for a short period (to avoid hammering the web service in the event that an arbitrary client app has a retry-forever behavior or the like). To clear SmartScreen's results cache, you can use the browser's Delete Browsing Data feature (Ctrl+Shift+Delete); deleting your history will instruct the SmartScreen client to also discard its cache.

Debugging Network Protection

In contrast to the simplicity of capturing Edge reputation checks, the component that performs reputation checks runs under a Windows service account and thus will not automatically send traffic to Fiddler. To get it to send its traffic to Fiddler, set the WinHTTP proxy setting from an Admin/Elevated command prompt:

    netsh winhttp set proxy 127.0.0.1:8888 "<-loopback>"

(Don’t forget to undo this later using netsh winhttp reset proxy, or various things will fall down after you stop running Fiddler!)

Unlike Network Protection generally, Web Content Filtering applies only to browser processes, so traffic from processes like Fiddler.exe is not blocked. Thus, to debug issues with WCF while watching the traffic from the NisSrv.exe process, you can start a browser instance that ignores the system proxy setting like so:

chrome.exe --proxy-server=direct:// https://example.com

When a page is blocked by Web Category Filtering, you’ll see the following page:

If you examine the response to the webservice call, you see that it’s $type=block with a responseCategory=CustomPolicy:

Block response from unitedstates.smartscreen.microsoft.com/api/browser/edge/navigate/3

Unfortunately, there’s no indication in the response about what category the blocked site belonged to, although you could potentially look it up in the Security portal or get a hint from a 3rd party classification site.

In contrast, when a page is blocked due to a Custom Indicator, the blocking page is subtly different:

If you examine the response to the webservice call, you see that it’s $type=block with a responseCategory=CustomBlockList:

As you can see in the response, there’s an iocId field that indicates whether the block was targeting a DomainName or ipAddress, and specifically which one was matched.

Understanding Edge vs. Other Clients

On Windows, Network Protection’s integration into Windows Filtering Platform gives it the ability to monitor traffic for all browsers on the system. For browsers like Chrome and Firefox, that means it checks all network connections used by a browser both for navigation/downloads and to retrieve in-page resources (scripts, images, videos, etc).

Importantly, however, on Windows, Network Protection's WFP filter ignores traffic from machine-wide Microsoft Edge browser installations (e.g. all channels except Edge Canary). In Edge, URL blocks are instead implemented using an Edge browser navigation throttle that calls into the SmartScreen web service. That service returns block verdicts for web threats (phishing, malware, techscams), as well as organizational blocks (WCF, Custom Indicators, MDCA) if configured by the enterprise. Today, Edge's SmartScreen integration performs reputation checks against navigation (top-level and subframe) URLs only, and does not check the URLs of subresources.

In contrast, on Mac, Network Protection filtering applies to Edge processes as well: blocks caused by SmartScreen threat intelligence are shown in Edge via a blocking page while blocks from Custom Indicators, Web Category Filtering, MDCA blocking, etc manifest as toast notifications.

Block Experience

TLS encryption used in HTTPS prevents the Network Protection client from injecting a meaningful blocking page into Chrome, Firefox, and other browsers. And even for unencrypted HTTP, the filter just injects a synthetic HTTP/403 response code with no indication that Defender blocked the resource.

Instead, blocks from Network Protection are shown as Windows “Toast” Notifications:

In contrast, SmartScreen’s direct integration into Edge allows for a meaningful error page:

Troubleshooting Network Protection “Misses”

Because Network Protection relies upon network-level observation of traffic, and browsers are increasingly trying to prevent network-level observation of traffic destinations, the most common complaints of “Network Protection is not working” relate to these privacy features. Ensure that browser policies are set to disable QUIC and Encrypted Client Hello.
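If you deploy those policies via the registry, a minimal Python sketch (run elevated) might look like the following. Note that the policy names QuicAllowed and EncryptedClientHelloEnabled are my assumption of the relevant Edge policy names; verify them against the current Microsoft Edge policy documentation before relying on this.

# Assumes the Edge policy names "QuicAllowed" and "EncryptedClientHelloEnabled";
# verify against current policy documentation. Requires an elevated prompt.
import winreg

EDGE_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Edge"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, EDGE_POLICY_KEY, 0, winreg.KEY_WRITE) as key:
    winreg.SetValueEx(key, "QuicAllowed", 0, winreg.REG_DWORD, 0)                  # disable QUIC
    winreg.SetValueEx(key, "EncryptedClientHelloEnabled", 0, winreg.REG_DWORD, 0)  # disable ECH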

Ensure that your scenario actually entails a network request being made: URL changes handled by ServiceWorkers and via pushState don’t require hitting the network.

If your scenario is blocked in Edge but some sites are not blocked in Chrome or Firefox, look at a NetLog to determine whether H/2 Connection Coalescing is in use, or disable Firefox's network.http.http2.coalesce-hostnames preference using about:config.

Ensure that Defender Network Protection is enabled and you do not have exclusions that apply to the target process.

Test Pages

An older test page for SmartScreen scenarios can be found here. Note that some of its tests are based on deprecated scenarios and no longer do anything.

Network Protection test URLs include Phishing and Malware.

A Solid 10K

After last year’s disappointing showing at the Capitol 10K, I wanted to do better this time around.

We left the house at 6:47; traffic was light and we pulled into my regular parking spot at 7:09. It was a very chilly morning at 42F with a bracing breeze, so I wore my running tights, making sure to Body Glide everywhere to avoid a repeat of the miserable Austin Half chafing. I headed over to the start line and had a productive stop at the porta-potties on the way. The B corral was completely packed by the time I arrived so I had to wait outside of the queue until it drained up to the start line. My Coros watch successfully streamed music to one earbud for the whole race.

Compared to last year, I started out slower: this year, my pace to the 2 mile split was 9:18 while it was 8:58 last year. But this time, I kept running throughout and finished the first 5K 1:26 faster, and finished the overall race 6:33 faster; still 7:17 slower than my fastest, but under my goal.

I probably should’ve been running a bit faster throughout, but by far the most important factor was that I only dropped to a walk a few times, and usually for only 30 seconds or so. This year, I didn’t recognize the start of the “KQ Hill” (usually there’s an obvious counting cable you run over) so I didn’t run as hard as I might have otherwise. But I ran the whole hill, and the following hills as well.

Over the years, I've gotten in the bad habit of dropping to a walk when things seem hard ("Oh, I'll walk until the next street light") but I battled that in this race in two ways: by pushing the walking trigger further into the distance ("I'll start walking when I pass the next street light") and by avoiding excuses by keeping my heart rate under control for the whole race:

Unlike most past races, my pace was more consistent throughout:

All in all, it wasn't my best performance, but I had fun with it. After the race, I wandered around the post-race expo (which I had entirely overlooked last year, oops) and tried a few non-alcoholic beers– they'd've been much more refreshing if it hadn't been in the low 40s and windy.

I’m excited to try to get an even better result in the Sunshine 10K in just 27 more days.

Defensive Technology: Exploit Protection

September 2025 tl;dr: You probably should not touch Exploit Protection settings. This post explains what the feature does and how it works, but admins and end-users should probably just leave it alone to do what it does by default.

Over the last several decades, the Windows team has added a stream of additional security mitigation features to the platform to help application developers harden their applications against exploitation. I commonly referred to these mitigations as the Alphabet Soup mitigations because each was often named by an acronym: DEP/NX, ASLR, SEHOP, CFG, etc. The vast majority of these mitigations were designed to help shield applications with memory-safety vulnerabilities, helping prevent an attacker from turning a crash into reliable malicious code execution.

Most of these mitigations were off-by-default for application-compatibility reasons: Windows has always worked very hard to ensure that each new version is compatible with the broad universe of software, and enabling a security mitigation by default could unexpectedly break some application and prevent users from having a good experience in a new version of Windows.

There were some exceptions; for instance, some mitigations were enabled by default for 64-bit applications because the mere existence of a 64-bit build in the mid-2000s was evidence that the application was being actively maintained.

In one case, Windows offered the user an explicit switch to turn on a mitigation (DEP/NX) for all processes, regardless of whether they opted in:

But, generally, application developers were required to opt in to new mitigations by setting compiler/linker flags, registry keys, or by calling the SetProcessMitigationPolicy API. One key task for product security engineers in each product cycle was to research the new mitigations available in Windows and opt the new version of their product (e.g. IE, Outlook, Word, etc) into the newest mitigations.
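For illustration, here's a minimal Python/ctypes sketch of a process opting itself into one mitigation (the "prohibit dynamic code" policy) at runtime via SetProcessMitigationPolicy; the enum value and structure layout are taken from the Win32 documentation, and this is a sketch rather than production code:

# A process opting itself into a mitigation at runtime (Windows only).
import ctypes

ProcessDynamicCodePolicy = 2  # value from the PROCESS_MITIGATION_POLICY enumeration

class PROCESS_MITIGATION_DYNAMIC_CODE_POLICY(ctypes.Structure):
    _fields_ = [("Flags", ctypes.c_uint32)]  # bit 0 = ProhibitDynamicCode

policy = PROCESS_MITIGATION_DYNAMIC_CODE_POLICY(Flags=1)
ok = ctypes.windll.kernel32.SetProcessMitigationPolicy(
    ProcessDynamicCodePolicy, ctypes.byref(policy), ctypes.sizeof(policy))
if not ok:
    raise ctypes.WinError()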

The requirement that developers themselves opt in was frustrating to some security architects, though: what if there was some older app that was no longer maintained but that could be protected by one of these new mitigations?

In response, EMET (the Enhanced Mitigation Experience Toolkit) was born. This standalone application provided a user-friendly experience for enabling mitigations for an app; under the covers, it twiddled the bits in the registry for the process name.

EMET was useful, but it exposed a tradeoff to security architects: they could opt a process into new mitigations, but they ran the risk of breaking the app, either entirely or only in certain scenarios. They would have to extensively test each application and mitigation to ensure compatibility across the scenarios they cared about.

EMET 5.52 went out of support way back in 2018, but it has since been replaced by the Exploit Protection node in the Windows Security app. Exploit Protection offers a very similar user experience to EMET, allowing the user to specify protections on a per-app basis as well as across all apps.

If you dig into the settings, you can see the available options:

You can also see the settings on a “per-program” basis:

…including the settings put into the registry by application installers and the like.

IFEO Registry screenshot showing the “Mandatory ASLR” bit set for msfeedssync.exe

While built into Windows, Exploit Protection also works with Microsoft Defender for Endpoint (MDE), enabling security admins to easily deploy rules across their entire tenant. Some rules offer an "Audit mode", which allows a security admin to check whether a given rule is likely to be compatible with their "real-world" deployment before switching it to enforcement mode.

Beyond the Windows UI and MDE, mitigations can also be deployed via a PowerShell module; often, you’ll use the Export link on a machine that’s configured the way you like and then import that XML to your other desktops.

Notably, the Set-ProcessMitigation command should be run as an admin: it needs to touch systemwide registry keys, and it silently ignores Access Denied errors, so a non-elevated run can appear to succeed while changing nothing. If you choose to import an XML configuration file, the importer's parser is extremely liberal (ignoring, for instance, whether the document is well-formed) and simply walks the document looking for AppConfig nodes that specify configuration settings per app.

The Big Challenge

The big challenge with Exploit Protection (and EMET before it) is that, if these mitigations were safe to apply by default, we would have done so. Any of these mitigations could conceivably break an application in a spectacular (or nearly invisible) way.

Exploit Mitigations like “Bottom Up ASLR” are opt-in because they can cause compatibility issues with applications that make assumptions about memory layout. Opting an application into a mitigation can cause the application to crash at startup, or later, at runtime, when the application’s (now incorrect) assumption causes a memory access error. Crashes could occur every time, or randomly.

When a mitigation is hit, you might see an explicit "block" event in the Event Log or Defender Portal events, or you might not. That's because in some cases a mitigation doesn't simply block the offending operation; instead, Windows terminates the process. You might look to see whether Watson has captured a crash of the application as it starts, but typically debugging these sorts of things entails starting the target application under a debugger and stepping through its execution until a failure occurs. That is rarely practical for anyone other than the developers of the application (who have its private symbols and source code).

If excluding an application from a mitigation doesn't work, it may be the case that the executable launches some other executable that also needs an exclusion. You might try collecting a Process Monitor log to see whether that's the case.

Other Problems…

Beyond the problem that turning on additional mitigations could break your applications in surprising and unusual ways, mitigations are also settable by both admins and developers, but there’s no good way to “reset” your settings if you make a mistake or change your mind. Various PowerShell scripts are available to wipe all of the EP settings from the registry, but doing so will wipe out not only the EP settings you set, but also any IFEO (Image File Execution Options) registry settings set by an application’s own developers, leaving you less secure than when you started.

Developer Best Practices

In the ideal case, developers will themselves opt in to (and verify) all available security mitigations for their apps, ensuring that they do not effectively "offload" the configuration and verification process to their customers.

With the increasing focus on security across the software ecosystem, we see that best practice followed by most major application developers, particularly in the places where it’s most useful (browsers and internet clients). Browser developers in particular tend to go far beyond the “alphabet soup” mitigations and also design their products with careful sandboxing such that, even if remote code execution is achieved, it is confined to a tight sandbox to protect the rest of the system.

Thanks for your help in protecting users!

-Eric



Defensive Technology: Windows Filtering Platform

Last November, I wrote a post about the basics of security software. In that post, I laid out how security software is composed of sensors and throttles controlled by threat intelligence. In today’s post, we’ll look at the Windows Filtering Platform, a fundamental platform technology introduced in Windows Vista that provides the core sensor and throttle platform upon which several important security features in Windows are built.

What is the Windows Filtering Platform?

The Windows Filtering Platform (WFP) is a set of technologies that enable software to observe and optionally block messages. In most uses, WFP is used to block network messages, but it can also block local RPC messages.

For networking and RPC scenarios, performance is critical, so a consumer of WFP specifies a filter that describes the messages it is interested in, and the platform will only call the consumer when a message that matches the filter is encountered. Filtering ensures that processing is minimized for messages that are not of interest to the consumer.

When a message matching the filter is encountered, it is sent to the consumer, which can examine the content of the message and then allow it to pass unmodified, change the content, or indicate that WFP should drop the message.
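In other words, the platform acts as a dispatcher: consumers register a (filter, callout) pair, and the callout is invoked only for matching messages. A conceptual Python model of that pattern (not the actual C API) looks something like this:

# Conceptual model of the WFP sensor/throttle pattern; not the real API.
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Verdict(Enum):
    PERMIT = "permit"
    BLOCK = "block"

@dataclass
class Registration:
    matches: Callable[[dict], bool]     # the filter: which messages the consumer cares about
    callout: Callable[[dict], Verdict]  # invoked only for matching messages

registrations: list[Registration] = []

def dispatch(message: dict) -> Verdict:
    for registration in registrations:
        if registration.matches(message):          # cheap pre-filtering keeps overhead low
            return registration.callout(message)   # consumer inspects and decides
    return Verdict.PERMIT                          # no consumer interested: pass through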

This sensor and throttle architecture empowers several critical security features in Windows.

Windows Firewall

Most prominently, WFP is the technology underneath the Windows Firewall (formally, "Windows Defender Firewall with Advanced Security", despite the feature having little to do with Defender). The Firewall controls whether processes may establish outbound connections or receive inbound traffic using a flexible set of rules. The Firewall settings console (wf.msc) allows the user to specify these rules, which are then enforced using the Windows Filtering Platform.

Rules can be added via the UI:

…or programmatically using the native API, PowerShell (ibid), or the NetSh command line tool:

netsh.exe advfirewall firewall add rule name="FiddlerProxy" program="C:\Program Files\Fiddler2\Fiddler.exe" action=allow profile=any dir=in edge=deferuser protocol=tcp description="Permit inbound connections to Fiddler"

Users can optionally enable prompting:

…and when a process tries to bind a port to allow inbound traffic, a UI prompt is shown:

For the most part, however, the Windows Firewall operates silently and does not show much in the way of UI. In contrast, the Malwarebytes Windows Firewall Control app provides a much more feature-rich control panel for the Windows Firewall.

One major limitation of the Windows Firewall, one that existed for almost two decades, is that it natively supports only rules that target IP addresses. Many websites periodically rotate between IPs (for operational reasons, load balancing, geographically-distributed CDNs, etc.), and many products are unwilling to commit to a fixed set of IP addresses forever. Thus, the inability to create firewall rules that use DNS names (e.g. "Always permit traffic to VirusTotal.com") was a significant limitation.

This limitation was later mitigated with a new feature called Dynamic Keywords. Dynamic keywords allow you to specify a target DNS name in a firewall rule. Windows Defender’s Network Protection feature will subsequently watch for any unencrypted DNS lookups of the specified DNS name. When Network Protection observes a resolution for a targeted DNS name, it will send a message to the firewall service directing it to update the rule with the IP address returned from DNS. (Note: This scheme is imperfect in several dimensions: for example, asynchrony means that the firewall rule may be updated milliseconds after the DNS resolution, such that the first attempt to connect to an allowed hostname could be blocked.)
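Conceptually, the flow looks like this (a model of the behavior described above, not the actual implementation):

# Conceptual model: a name-based allow rule becomes IP-based entries as DNS
# answers are observed. Not the actual firewall implementation.
import socket

allowed_names = {"virustotal.com"}
allowed_ips: set[str] = set()

def on_dns_response(name: str, addresses: list[str]) -> None:
    # Network Protection observes the (unencrypted) DNS answer and asks the
    # firewall service to add the resolved addresses to the rule.
    if name in allowed_names:
        allowed_ips.update(addresses)

def is_connection_allowed(ip: str) -> bool:
    # Note the race: a connection attempted before on_dns_response() has run
    # may still be evaluated against the not-yet-updated rule and get blocked.
    return ip in allowed_ips

addresses = sorted({info[4][0] for info in socket.getaddrinfo("virustotal.com", 443)})
on_dns_response("virustotal.com", addresses)
print(is_connection_allowed(addresses[0]))  # True once the rule has been updated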

Zero Trust DNS (ZTDNS)

More recently, the Windows team has been building a feature called “Zero Trust DNS” whereby a system can be configured to perform all DNS resolutions through a secure DNS resolver, and any connections to any network address that was not returned in a response from that resolver are blocked by WFP.

In this configuration, your organization’s DNS server becomes the network security control point: if your DNS server returns an address for a hostname, your clients can connect to that address, but if DNS refuses to resolve the hostname (returning no address), the network connection is blocked. (An app that is hardcoded to talk to a particular IP address would find its requests blocked, since no DNS request “unlocked” access to that address). The ZeroTrustDNS feature is obviously only suitable for certain operational environments due to its compatibility impact.

AppContainer

Windows 8 introduced a new application isolation technology called AppContainer that aimed to improve security by isolating “modern applications” from one another and the rest of the system. AppContainers are configured with a set of permissions, and the network permission set permits restricting an app to public or private network access.

James Forshaw from Google’s Project Zero wrote an amazing blog post about how AppContainer’s network isolation works. The post includes a huge amount of low-level detail about the implementation of both WFP and the Windows Firewall.

Restricted Network Access

Years before the advent of AppContainers, Windows included a similar feature to block network traffic from Windows Services. Like the AppContainer controls, it is enforced in WFP at a layer evaluated below/before the Windows Firewall rules although, also like AppContainer's controls, it is implemented within the service named Firewall.

Windows Service Hardening enables restrictions that developers can set on a service's network access. Network access restrictions for Services:

  • are evaluated before the Windows Firewall rules
  • are enforced regardless of whether Windows Firewall is enabled
  • are defined programmatically using the INetFwServiceRestriction and INetFwRule APIs.

Windows Vista and later include a set of predefined network access restrictions for built-in services. Service network access restrictions are stored in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\RestrictedServices\ registry key. The predefined restrictions are stored in the Static\System key, and custom restrictions are stored in the Configurable key.

Way back in 2008, Jan De Clercq wrote a great post explaining Service Hardening. A 2014 blog post shows a way to view the rules.

(New-Object -ComObject HNetCfg.FwPolicy2).ServiceRestriction.Rules

or

Get-NetFirewallRule -PolicyStore ConfigurableServiceStore
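You can also read the predefined restrictions directly from the registry path mentioned above; a minimal, read-only Python sketch:

# Enumerate the predefined (Static\System) service restriction rules.
import winreg

PATH = (r"SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters"
        r"\FirewallPolicy\RestrictedServices\Static\System")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH) as key:
    value_count = winreg.QueryInfoKey(key)[1]
    for index in range(value_count):
        name, rule, _type = winreg.EnumValue(key, index)
        print(f"{name}: {rule}")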

Port Scanning Prevention Filter

A one-off protection built atop WFP is the Port Scanning Prevention filter, described here. The article also briefly mentions the use of Event Tracing for Windows (ETW) logging in WFP.

RPC Filtering

The Windows Filtering Platform is not limited to Network messages; it can also be used to filter RPC messages between processes.

Again, Google’s James Forshaw has an amazing blog post on the RPC Filter, and Akamai’s Ophir Harpaz and Stiv Kupchik wrote a Definitive Guide to the RPC Filter too.

Various guides (e.g. this one) exist for configuring the RPC Filter to block various attack techniques, and it’s used in security features like Microsoft Defender for Endpoint’s Disruption module.

I only learned about MDE's use of RPC Filtering recently. In 2024, Microsoft employees were briefly unable to play Netflix or other DRM'd video in the Microsoft Edge browser. The root cause turned out to be Defender's Disruption RPC filter blocking messages from the PlayReady DRM sandboxed process, which caused playback to fail.

Microsoft Defender Network Protection

Network Protection (NP) is composed of a WFP driver (wdNisDrv.sys) and a service (NisSrv.exe). It has the ability to synchronously block TCP traffic based on IP and URI reputation, relying on threat intelligence from the SmartScreen webservice and signatures authored by the Defender team. While SmartScreen is integrated directly into Microsoft Edge, NP can extend SmartScreen's anti-phishing/anti-malware protection to other clients like Chrome and Firefox.

Beyond Web Threat Protection, Network Protection's WFP callout is also used as the sensor/throttle for Web Content Filtering and custom network indicators, as well as enforcing blocks on unsanctioned apps via Microsoft Defender for Cloud Apps.

Unfortunately, the use of TLS causes user-experience problems for the blocking of unwanted content. When Network Protection determines that it must block an HTTPS site, it cannot return an error page to the browser because TLS prevents the injection of content on the secure channel. So, NP instead injects a "Handshake Failure" TLS Fatal Alert into the bytestream to the client and drops the connection. The client browser, unaware that the fatal alert was injected by Network Protection, concludes that the server does not properly support TLS, so it shows an error page complaining of the same.

t=2817 [st=305] SOCKET_BYTES_RECEIVED
                --> byte_count = 14
                --> bytes =
                15 03 03 00 02 02 28 15  03 03 00 02 01 00
t=2817 [st=305] SSL_ALERT_RECEIVED
                --> bytes = 02 28
t=2817 [st=305] SSL_HANDSHAKE_ERROR
                --> error_lib = 16
                --> error_reason = 1040
                --> file = "..\\..\\third_party\\boringssl\\src\\ssl\\tls_record.cc"
                --> line = 594
                --> net_error = -113 (ERR_SSL_VERSION_OR_CIPHER_MISMATCH)
                --> ssl_error = 1
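
To make sense of the injected bytes, recall that a TLS alert record starts with content type 0x15, followed by the protocol version, a two-byte length, and then the alert's (level, description) pair. A few lines of Python decode the capture above:

# Decode the 14 bytes shown in the NetLog capture above.
records = bytes.fromhex("15030300020228" "15030300020100")
ALERT_LEVELS = {1: "warning", 2: "fatal"}
ALERT_DESCRIPTIONS = {0: "close_notify", 40: "handshake_failure"}

offset = 0
while offset < len(records):
    length = int.from_bytes(records[offset + 3:offset + 5], "big")
    level, description = records[offset + 5], records[offset + 6]
    print(f"{ALERT_LEVELS[level]} alert: {ALERT_DESCRIPTIONS[description]}")
    offset += 5 + length
# Output:
#   fatal alert: handshake_failure   (the injected block)
#   warning alert: close_notify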

If the system’s Notifications feature is enabled, a notification toast appears with the hope that you’ll see it and understand what’s happened.

Chromium-based browsers show ERR_SSL_VERSION_OR_CIPHER_MISMATCH when the TLS handshake is interrupted
Mozilla Firefox shows SSL_ERROR_NO_CYPHER_OVERLAP when the TLS handshake is interrupted

Beyond blocking unwanted connections, Network Protection provides network-related signals to the Defender engine, enabling behavior monitoring signatures to target suspicious network connections. For example, by generating a JA3/JA3S/JA4 fingerprint of an HTTPS handshake's fields and comparing it to known-suspicious fingerprints, security software can detect a malicious process communicating with its Command and Control servers, even when the traffic is encrypted and the process is not yet known to be malicious.
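As a rough sketch of the fingerprinting idea (the real JA3 definition has additional rules, such as stripping GREASE values), the fingerprint is just an MD5 hash over a canonical rendering of a handful of ClientHello fields:

# Sketch of JA3: decimal values joined by '-' within each field, fields joined
# by ',', then MD5-hashed. Real implementations also strip GREASE values.
import hashlib

def ja3(version: int, ciphers, extensions, curves, point_formats) -> str:
    fields = [str(version)] + [
        "-".join(str(value) for value in values)
        for values in (ciphers, extensions, curves, point_formats)
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Example with made-up values; a real sensor extracts them from the handshake.
print(ja3(771, [4865, 4866, 49195], [0, 23, 65281], [29, 23, 24], [0]))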

Microsoft Defender for Endpoint EDR

MDE's EDR feature also includes MsSecWfp, a driver aimed at filtering network connections based on rules. It is leveraged in network isolation scenarios, where a configuration for the machine is passed to the driver for enforcement. The driver produces audit events in the form of ETW events; it does not perform deep traffic inspection.

Enhanced Phishing Protection

As I described last year, Win11’s Enhanced Phishing Protection (EPP) aims to protect your domain password across all apps on the system.

When EPP is enabled, a WFP callout in WTD.sys watches for the Server Name Indication extension in TLS Client Hello messages so that it can understand which hosts a process has established network connections with. If a user subsequently types their domain password, the Web Threat Defense service checks the reputation of the established connections to attempt to determine whether the process is connected to a known phishing site.
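Conceptually (again, a model of the described behavior rather than the actual WTD implementation):

# Conceptual model: remember which hostnames each process has handshaken with
# (learned from the SNI in its ClientHello messages), then check those hosts'
# reputations when the user types a protected password into that process.
connections_by_pid: dict[int, set[str]] = {}

def on_client_hello(pid: int, sni_hostname: str) -> None:
    connections_by_pid.setdefault(pid, set()).add(sni_hostname)

def on_protected_password_typed(pid: int, check_reputation) -> bool:
    # Return True if the process is talking to a known phishing host.
    return any(check_reputation(host) == "phishing"
               for host in connections_by_pid.get(pid, ()))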

And More…

Importantly, WFP isn’t used just by Microsoft — many third party security products are also implemented using the Windows Filtering Platform. Security applications that integrate with the Windows Security Center are required (by policy) to be built upon WFP.

Tailscale wrote a great blog on building atop the WFP.

Other Approaches

While WFP is a core platform technology used by many products, it’s not the only one available. For example, beyond its use of WFP in Network Protection, Microsoft Defender for Endpoint also includes a Network Detection and Response (NDR) feature.

NDR is composed of a service which listens to ETW notifications generated by Pktmon and WinSock, performing async packet analysis using Zeek, an open-source network monitoring platform. Network data is captured at a layer below/before WFP, using PktMon. Compared to WFP, PktMon enables capture of lower-level network data, useful for watching for attempts to exploit bugs in Windows’ network stack itself (e.g. 2021, 2024).

Finally, some network security solutions are based on network traffic routing (e.g. proxies), optionally with decryption via MitM. Microsoft Entra's Global Secure Access is a newer offering in this category, but many vendors offer similar solutions.

Limitations & Future Directions

Perhaps the most important limitation of network-level monitoring is that the increasingly ubiquitous use of encryption (HTTPS/TLS) means that request and response payloads are usually indecipherable at the network level.

In the "good old days" of network security, a security product could observe all DNS resolution and examine the full content of requests and responses. Nowadays, DNS traffic is increasingly encrypted, and HTTPS prevents network-level monitoring from observing URLs, requests (headers and body) and responses (headers and bodies). For years, security software could still observe the target hostname by sniffing the SNI out of ClientHello messages, but the ongoing deployment of Encrypted Client Hello means that even this signal is disappearing.

Beyond ECH, a growing fraction of traffic takes place over HTTP/3 and QUIC, which don't use TCP at all. QUIC's use of "initial encryption" precludes trivial determination of the server's name.

Increasingly, network level monitors can only see the IP/Port of connections, and trying to evaluate connections based on IP address is fraught with peril.

Some security solutions attempt to inject their code into clients or otherwise obtain HTTPS decryption keys, but these approaches tend to be unreliable or offer poor performance.

It seems likely that in the future, the security industry will need to work with application developers to ensure that their software integrates with network security checks directly (as they already do elsewhere) instead of trying to sniff out the destination of traffic at the network level.

-Eric

Runtime Signature Checking Threat Model

Telerik developers recently changed Fiddler to validate the signature on extension assemblies before they load. If the assembly is unsigned, the user is presented with the following message:

In theory, this seems fine/good– signing files is a good thing!

However, it’s important to understand the threat model and tradeoffs here.

Validating signatures every time a file is loaded takes time and slows the startup of the app. That’s particularly true if online certificate revocation checking is performed. The performance impact is one reason why most of my applications have a manifest that indicates that .NET shouldn’t bother:

<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>

Signing your installers is critical to help protect them at rest on your servers and streamline the security checks that happen on download and install. Signing the binaries within can be useful in case the user has Smart App Control enabled, and other security software (e.g. Firewall rules that target publisher signatures) may benefit as well.

However, having your app check the signature itself is less useful than you might expect for most applications. The problem is that there’s usually no trust boundary that would preclude an attacker from, for instance, tampering with your app’s code to remove the signature check. In most cases, the attacker could simply modify fiddler.exe to remove the new signature checking code, such that the protection is removed. Similarly, they could likely execute a .DLL hijacking attack to get their code loaded without any signature check at all. Or they could use their own process to inject code into the victim’s address space at runtime. It’s a long list.

In Telerik’s case, tampering to evade signature checking is even simpler. If the user elects to “Always allow” an unsigned extension, that decision is stored as a base64 encoded string in a simple Fiddler preference:

You can use Fiddler’s TextWizard to decode the preference value from about:config

An attacker with sufficient permission to write a .DLL to a location from which Fiddler will load it would also have sufficient permission to edit the registry keys where this preference is stored.

Finally, Fiddler's signature check doesn't tell the user who signed the file, so any signature that chains to any CA is silently accepted. Now, this isn't entirely worthless: CAs cannot prevent a certificate from being used to sign malware, but in theory a certificate found to do so will eventually get revoked.

If you plan to check code signatures in your application, carefully consider the threat model and ensure that you understand the limits to the protection. And remember that sometimes, “code” may be stored in a type of file that does not natively support signing, as in the case of Fiddler’s script or certain Chromium files.

-Eric